In my best design so far, I keep two adjacency lists. One is the assignment list (roughly the edges of the Hasse diagram), because I need to preserve which pairs of nodes were explicitly set as ordered; the other is the transitive closure of the first, so that I can efficiently query whether one node is ordered with respect to another. For example, with the chain a < b < c, the assignment list holds only the edges (a, b) and (b, c), while the closure additionally holds (a, c). Right now, every time an edge is added to or removed from the assignment adjacency list, I recompute the whole transitive closure.
It looks like this:
from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

assignment = Table('assignment', Base.metadata,
    Column('parent', Integer, ForeignKey('node.id')),
    Column('child', Integer, ForeignKey('node.id')))

closure = Table('closure', Base.metadata,
    Column('ancestor', Integer, ForeignKey('node.id')),
    Column('descendent', Integer, ForeignKey('node.id')))

class Node(Base):
    __tablename__ = 'node'
    id = Column(Integer, primary_key=True)

    # the explicitly assigned orderings (Hasse diagram edges)
    parents = relationship('Node', secondary=assignment,
        backref='children',
        primaryjoin=id == assignment.c.parent,
        secondaryjoin=id == assignment.c.child)

    # the transitive closure, rebuilt by recompute_ancestry()
    ancestors = relationship('Node', secondary=closure,
        backref='descendents',
        primaryjoin=id == closure.c.ancestor,
        secondaryjoin=id == closure.c.descendent,
        viewonly=True)

    @classmethod
    def recompute_ancestry(cls, conn):
        # throw away the old closure and rebuild it from scratch
        conn.execute(closure.delete())
        adjacent_values = conn.execute(assignment.select()).fetchall()
        conn.execute(closure.insert(), floyd_warshall(adjacent_values))
where floyd_warshall() is an implementation of the eponymous algorithm.
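The post doesn't show floyd_warshall() itself. A minimal sketch of what such a helper might look like, assuming it receives the (parent, child) rows fetched from the assignment table and must return parameter dicts for closure.insert() (the body below is my assumption, not the original author's code):

def floyd_warshall(edges):
    # Sketch only: compute the transitive closure of the given
    # (parent, child) pairs and return rows for closure.insert().
    nodes = {n for edge in edges for n in edge}
    reach = set((parent, child) for parent, child in edges)
    # classic O(n^3) relaxation: try every node k as an intermediate
    for k in nodes:
        for i in nodes:
            for j in nodes:
                if (i, k) in reach and (k, j) in reach:
                    reach.add((i, j))
    return [{'ancestor': i, 'descendent': j} for i, j in reach]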
This leaves me with two problems. The first is that it doesn't look efficient, but I'm not sure which algorithm I could use instead.
The second is more about practicality: Node.recompute_ancestry() has to be called explicitly every time an assignment happens, and only after the assignments have been flushed to the database, and with the right connection. If I then want to see the change reflected in the ORM, I have to refresh the session yet again. I think it would be much easier if I could express the recompute-ancestry operation in terms of the ORM.
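For concreteness, the call pattern being complained about looks roughly like this (the session and node names are illustrative, not from the original post):

a, b = Node(), Node()
session.add_all([a, b])
a.children.append(b)            # record the explicit ordering edge
session.flush()                 # assignment rows must reach the DB first
Node.recompute_ancestry(session.connection())  # rebuild the closure
session.expire_all()            # expire so a.ancestors reflects the new closure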
Solution

Well, I went ahead and solved my own problem. The novel part is applying the Floyd-Warshall algorithm to the intersection of the descendants of the parent's ancestors with the ancestors of the child's descendants, but applying the output only to the parent's ancestors and the child's descendants. I spent so much time on it that I ended up posting the process on my blog, but here's the code.

from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

association_table = Table('edges', Base.metadata,
    Column('predecessor', Integer,
           ForeignKey('nodes.id'), primary_key=True),
    Column('successor', Integer,
           ForeignKey('nodes.id'), primary_key=True))

path_table = Table('paths', Base.metadata,
    Column('predecessor', Integer,
           ForeignKey('nodes.id'), primary_key=True),
    Column('successor', Integer,
           ForeignKey('nodes.id'), primary_key=True))

class Node(Base):
    __tablename__ = 'nodes'
    id = Column(Integer, primary_key=True)
    # extra columns

    def __repr__(self):
        return '<Node #%r>' % (self.id,)

    # the explicitly assigned edges
    successors = relationship('Node', backref='predecessors',
        secondary=association_table,
        primaryjoin=id == association_table.c.predecessor,
        secondaryjoin=id == association_table.c.successor)

    # the transitive closure, maintained incrementally
    before = relationship('Node', backref='after',
        secondary=path_table,
        primaryjoin=id == path_table.c.predecessor,
        secondaryjoin=id == path_table.c.successor)

    def __lt__(self, other):
        return other in self.before

    def add_successor(self, other):
        if other in self.successors:
            return
        self.successors.append(other)
        self.before.append(other)
        for descendent in other.before:
            if descendent not in self.before:
                self.before.append(descendent)
        for ancestor in self.after:
            if ancestor not in other.after:
                other.after.append(ancestor)

    def del_successor(self, other):
        if not self < other:
            # nodes are not connected, do nothing!
            return
        if other not in self.successors:
            # nodes aren't adjacent, but this *could*
            # be a warning...
            return

        self.successors.remove(other)

        # build up the sets of nodes that will be affected by the
        # removal we just did.
        ancestors = set(other.after)
        descendents = set(self.before)

        # we also need to build up a list of nodes that will determine
        # where the paths may be.  basically, we're looking for every
        # node that is both before some node in the descendents and
        # ALSO after the ancestors.  such nodes might not be comparable
        # to self or other, but may still be part of a path between
        # the nodes in ancestors and the nodes in descendents.
        ancestors_descendents = set()
        for ancestor in ancestors:
            ancestors_descendents.add(ancestor)
            for descendent in ancestor.before:
                ancestors_descendents.add(descendent)

        descendents_ancestors = set()
        for descendent in descendents:
            descendents_ancestors.add(descendent)
            for ancestor in descendent.after:
                descendents_ancestors.add(ancestor)

        search_set = ancestors_descendents & descendents_ancestors

        known_good = set()  # this is the 'paths' set from the
                            # original algorithm.

        # as before, we need to initialize it with the paths we
        # know are good.  this is just the successor edges in
        # the search set.
        for predecessor in search_set:
            for successor in search_set:
                if successor in predecessor.successors:
                    known_good.add((predecessor, successor))

        # we can now work our way through floyd_warshall to resolve
        # all adjacencies:
        for ancestor in ancestors:
            for descendent in descendents:
                if (ancestor, descendent) in known_good:
                    # already got this one, so we don't need to look
                    # for an intermediate.
                    continue
                for intermediate in search_set:
                    if (ancestor, intermediate) in known_good \
                            and (intermediate, descendent) in known_good:
                        known_good.add((ancestor, descendent))
                        break  # no need to look any further for an
                               # intermediate; move on to the next
                               # descendent.

        # sift through the stale paths and update the links
        for ancestor in ancestors:
            for descendent in descendents:
                if descendent in ancestor.before \
                        and (ancestor, descendent) not in known_good:
                    ancestor.before.remove(descendent)
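A quick usage sketch to show the incremental behavior (the in-memory SQLite setup is my assumption; it is not part of the original answer):

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = Session(bind=engine)

a, b, c = Node(), Node(), Node()
session.add_all([a, b, c])
a.add_successor(b)
b.add_successor(c)      # paths now also records a -> c
assert a < c
a.del_successor(b)      # prunes the stale paths a -> b and a -> c
assert not a < c
session.commit()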