When using GloVe embeddings in an NLP task, some words from the dataset may not exist in GloVe, so we instantiate random weights for these unknown words.
Is it possible to freeze the weights that come from GloVe and train only the newly instantiated ones?
I only know that we can set:
model.embedding.weight.requires_grad = False
But then the new words cannot be trained either.
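For reference, a minimal sketch of the full-freeze setup referred to above (glove_weights is a hypothetical FloatTensor holding the GloVe vectors plus randomly initialized rows for the unknown words):

import torch

# Hypothetical: GloVe vectors with extra randomly initialized rows appended
# for the unknown words.
vocab_size, dim = 500, 300
glove_weights = torch.randn(vocab_size, dim)

# freeze=True sets requires_grad=False on the entire weight matrix,
# so the random rows for the unknown words are frozen as well.
embedding = torch.nn.Embedding.from_pretrained(glove_weights, freeze=True)

# Equivalent to setting embedding.weight.requires_grad = False by hand.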
Or is there a better way to capture the semantics of these words?
Best answer

1. Split the embedding into two separate objects

One approach is to use two separate embeddings: one for the pretrained vectors and one for the vectors still to be trained.
The GloVe embedding should be frozen, while tokens without a pretrained representation get their vectors from the trainable layer.
This works if you format your data so that tokens with a pretrained representation use a smaller index range than tokens without a GloVe representation. Say your pretrained indices are in the range [0, 300], while the indices without a representation are in [301, 500]. I would go with something along these lines:
import numpy as np
import torch


class YourNetwork(torch.nn.Module):
    def __init__(self, glove_embeddings: np.ndarray, how_many_tokens_not_present: int):
        super().__init__()
        # from_pretrained freezes the weights by default.
        self.pretrained_embedding = torch.nn.Embedding.from_pretrained(
            torch.from_numpy(glove_embeddings).float()
        )
        self.trainable_embedding = torch.nn.Embedding(
            how_many_tokens_not_present, glove_embeddings.shape[1]
        )
        # Rest of your network setup

    def forward(self, batch):
        # Tokens without a pretrained representation must have indices BIGGER than
        # the pretrained ones; adjust your data-creating function accordingly.
        mask = batch >= self.pretrained_embedding.num_embeddings

        # You may want to optimize this; you could probably get away without the
        # copy, though I'm not currently sure how.
        pretrained_batch = batch.clone()
        pretrained_batch[mask] = 0
        embedded_batch = self.pretrained_embedding(pretrained_batch)

        # Every token without a representation has to be brought into the
        # index range of the trainable embedding.
        batch = batch - self.pretrained_embedding.num_embeddings
        # Zero out the ones which already have a pretrained embedding.
        batch[~mask] = 0
        non_pretrained_embedded_batch = self.trainable_embedding(batch)

        # Finally, replace the placeholder vectors produced by the pretrained
        # embedding with the trainable embeddings for the masked tokens.
        embedded_batch[mask] = non_pretrained_embedded_batch[mask]

        # Rest of your code
        ...
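A minimal usage sketch under the assumptions above (the vocabulary split, glove_matrix, and the sample batch are illustrative, not part of the original answer):

import numpy as np
import torch

# Hypothetical split: 301 tokens with GloVe vectors (indices 0-300) and
# 200 tokens without (indices 301-500); embedding dimension 50.
glove_matrix = np.random.rand(301, 50).astype(np.float32)  # stand-in for real GloVe rows
net = YourNetwork(glove_matrix, how_many_tokens_not_present=200)

# Indices above 300 are routed to the trainable embedding inside forward().
batch = torch.LongTensor([[0, 17, 301], [299, 450, 2]])

# from_pretrained() freezes its weights by default, so only the trainable
# embedding (and the rest of the network) ends up in the optimizer.
optimizer = torch.optim.Adam(
    [p for p in net.parameters() if p.requires_grad], lr=1e-3
)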
2. Zero out the gradients of specified tokens
This one is a bit tricky, but I think it is quite concise and easy to implement. If you obtain the indices of the tokens that do have a GloVe representation, you can explicitly zero out their gradients after backpropagation, so those rows are never updated while the rows for the new words keep their gradients.
import torch

embedding = torch.nn.Embedding(10, 3)
X = torch.LongTensor([[1, 2, 4, 5], [4, 3, 2, 9]])

values = embedding(X)
loss = values.mean()  # use whatever loss you want
loss.backward()

# Let's say those indices in your embedding are pretrained (have a GloVe representation)
indices = torch.LongTensor([2, 5])

print("Before zeroing out gradient")
print(embedding.weight.grad)

print("After zeroing out gradient")
embedding.weight.grad[indices] = 0
print(embedding.weight.grad)
And the output of the second approach:
Before zeroing out gradient
tensor([[0.0000, 0.0000, 0.0000],
        [0.0417, 0.0417, 0.0417],
        [0.0833, 0.0833, 0.0833],
        [0.0417, 0.0417, 0.0417],
        [0.0833, 0.0833, 0.0833],
        [0.0417, 0.0417, 0.0417],
        [0.0000, 0.0000, 0.0000],
        [0.0000, 0.0000, 0.0000],
        [0.0000, 0.0000, 0.0000],
        [0.0417, 0.0417, 0.0417]])
After zeroing out gradient
tensor([[0.0000, 0.0000, 0.0000],
        [0.0417, 0.0417, 0.0417],
        [0.0000, 0.0000, 0.0000],
        [0.0417, 0.0417, 0.0417],
        [0.0833, 0.0833, 0.0833],
        [0.0000, 0.0000, 0.0000],
        [0.0000, 0.0000, 0.0000],
        [0.0000, 0.0000, 0.0000],
        [0.0000, 0.0000, 0.0000],
        [0.0417, 0.0417, 0.0417]])
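In an actual training loop, instead of zeroing the gradient by hand after every backward() call, the same effect can be achieved with a gradient hook. A minimal sketch, assuming pretrained_indices holds the rows that have GloVe vectors (this hook-based variant is not part of the original answer):

import torch

embedding = torch.nn.Embedding(10, 3)
# Hypothetical: rows that carry pretrained GloVe vectors and must stay fixed.
pretrained_indices = torch.LongTensor([2, 5])

def zero_pretrained_grad(grad):
    # Called during backward(); the returned tensor replaces the gradient.
    grad = grad.clone()
    grad[pretrained_indices] = 0
    return grad

embedding.weight.register_hook(zero_pretrained_grad)

X = torch.LongTensor([[1, 2, 4, 5], [4, 3, 2, 9]])
embedding(X).mean().backward()

# Rows 2 and 5 of embedding.weight.grad are now zero, so a plain SGD step
# (no momentum or weight decay) leaves the pretrained rows untouched.
print(embedding.weight.grad)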