Make a Love-Confession Gadget with Python in Ten Seconds! 520 Has Passed, but Qixi Is Still Coming!
Overview
This 520, I got served a huge helping of dog food (couples flaunting their love everywhere): the rich were super romantic, and the broke still knew how to have fun! So today I've decided to teach you all how to build a love-confession gadget. Even if it's too late for this round, you'll definitely get to use it next time!
Today, I'll show you how to use Python to make a special gift for your significant other. And of course, if you're still single, you can use it as a confession gadget to win over the one you love. Those who can already code in Python know what to do without my help, but what about those who can't program and still want to try?
Let's start with a beginner version. It's fairly simple: use Python's turtle module to draw a heart. Here's the code first:

```python
import turtle
import time

# draw the rounded top of the heart
def littleHeart():
    for i in range(200):
        turtle.right(1)
        turtle.forward(2)

# message to write; defaults to "I love you"
love = input('Please enter a sentence of love, otherwise the default is "I love you":\n')
# signature or dedication; skipped if left empty
me = input('Please enter pen name, otherwise the default do not execute:\n')
if love == '':
    love = 'I love you'

# window size
turtle.setup(width=900, height=500)
# pen and fill colours
turtle.color('red', 'pink')
# pen thickness
turtle.pensize(3)
# drawing speed
turtle.speed(1)
# lift the pen
turtle.up()
# hide the turtle cursor
turtle.hideturtle()
# move to the start position; (0, 0) is the window centre
turtle.goto(0, -180)
turtle.showturtle()
# draw the left edge
turtle.down()
turtle.speed(1)
turtle.begin_fill()
turtle.left(140)
turtle.forward(224)
# draw the top-left arc of the heart
littleHeart()
# draw the top-right arc of the heart
turtle.left(120)
littleHeart()
# draw the right edge
turtle.forward(224)
turtle.end_fill()
turtle.pensize(5)
turtle.up()
turtle.hideturtle()
# write the message in the heart, first pass
turtle.goto(0, 0)
turtle.showturtle()
turtle.color('#CD5C5C', 'pink')
# font can be any font installed on your machine; align sets where the text is anchored
turtle.write(love, font=('gungsuh', 30), align="center")
turtle.up()
turtle.hideturtle()
time.sleep(2)
# write the message a second time in a new colour
turtle.goto(0, 0)
turtle.showturtle()
turtle.color('red', 'pink')
turtle.write(love, font=('gungsuh', 30), align="center")
turtle.up()
turtle.hideturtle()
# write the signature
if me != '':
    turtle.color('black', 'pink')
    time.sleep(2)
    turtle.goto(180, -180)
    turtle.showturtle()
    turtle.write(me, font=('gungsuh', 20), align="center", move=True)
# close the window on click
window = turtle.Screen()
window.exitonclick()
```

The final result is shown below. It's a fairly basic heart, nothing too difficult; you can always extend the code to make it more impressive.
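If you want to send the drawing to someone rather than demo it live, one option (my addition, not part of the original post) is to export the Tk canvas that turtle draws on; the file name here is arbitrary:

```python
# add this just before window.exitonclick():
# turtle draws on a Tk canvas, which can be saved as an EPS file
canvas = turtle.getcanvas()
canvas.postscript(file="heart.eps")  # open or convert the EPS with any image tool
```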
If you want to step it up, here's an upgraded version: a love tree that blossoms with little hearts, drawn recursively.

```python
import turtle
import random

# draw a little heart at (x, y)
def love(x, y):
    lv = turtle.Turtle()
    lv.hideturtle()
    lv.up()
    lv.goto(x, y)  # move to (x, y)

    # draw one arc of the heart's top
    def curvemove():
        for i in range(20):
            lv.right(10)
            lv.forward(2)

    lv.color('red', 'pink')
    lv.speed(10000000)  # any speed above 10 means "as fast as possible"
    lv.pensize(1)
    # start drawing the heart
    lv.down()
    lv.begin_fill()
    lv.left(140)
    lv.forward(22)
    curvemove()
    lv.left(120)
    curvemove()
    lv.forward(22)
    lv.write("WM", font=("Arial", 12, "normal"), align="center")  # initials of the person you're confessing to
    lv.left(140)  # restore the heading after drawing
    lv.end_fill()

def tree(branchLen, t):
    if branchLen > 5:  # stop the recursion when too little branch is left
        if branchLen < 20:  # short branches turn green and end in a heart
            t.color("green")
            t.pensize(random.uniform((branchLen + 5) / 4 - 2, (branchLen + 6) / 4 + 5))
            t.down()
            t.forward(branchLen)
            love(t.xcor(), t.ycor())  # draw a heart at the turtle's current position
            t.up()
            t.backward(branchLen)
            t.color("brown")
            return
        t.pensize(random.uniform((branchLen + 5) / 4 - 2, (branchLen + 6) / 4 + 5))
        t.down()
        t.forward(branchLen)
        # recurse into two sub-branches
        ang = random.uniform(15, 45)
        t.right(ang)
        tree(branchLen - random.uniform(12, 16), t)  # randomly shorten the branch
        t.left(2 * ang)
        tree(branchLen - random.uniform(12, 16), t)  # randomly shorten the branch
        t.right(ang)
        t.up()
        t.backward(branchLen)

myWin = turtle.Screen()
t = turtle.Turtle()
t.hideturtle()
t.speed(1000)
t.left(90)
t.up()
t.backward(200)
t.down()
t.color("brown")
t.pensize(32)
t.forward(60)
tree(100, t)
myWin.exitonclick()
```
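One small tweak worth knowing (my addition): the branch angles and lengths come from random.uniform, so every run grows a different tree. Seeding the generator before calling tree() makes the result repeatable:

```python
# pin down the random choices so the same tree is drawn every run;
# 520 is just a thematic choice of seed
random.seed(520)
```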
Next, two tricks with images. The first one: blending two pictures into one. Start by choosing two pictures; you could use a portrait of your sweetheart for one and a landscape (or anything else) for the other. This is where your taste comes in. Here I picked two landscapes.
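The loading step only appeared as a screenshot in the original post, so here is a minimal sketch of it. The file names and the 50/50 weights are my assumptions; the names img1, img2, percent1 and percent2 are the ones the snippets below rely on:

```python
from PIL import Image

# the two source pictures; the file names here are placeholders
img1 = Image.open('1.jpg').convert('RGB')  # convert ensures getpixel returns (r, g, b)
img2 = Image.open('2.jpg').convert('RGB')

# blend weights; they should sum to 1, and 0.5/0.5 mixes the images evenly
percent1 = 0.5
percent2 = 0.5
```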
Then take the images' dimensions:

```python
# use the smaller width and height of the two images
width = min(img1.size[0], img2.size[0])
height = min(img1.size[1], img2.size[1])
img_new = Image.new('RGB', (width, height))
```

Now blend the pixels:

```python
# mix the two images pixel by pixel with the chosen weights
for x in range(width):
    for y in range(height):
        r1, g1, b1 = img1.getpixel((x, y))
        r2, g2, b2 = img2.getpixel((x, y))
        r = int(percent1 * r1 + percent2 * r2)
        g = int(percent1 * g1 + percent2 * g2)
        b = int(percent1 * b1 + percent2 * b2)
        img_new.putpixel((x, y), (r, g, b))
```

Finally, just save it:

```python
# save the blended image
img_new.save('new.jpg')
```
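For what it's worth, Pillow ships this weighted mix as a built-in; assuming the two images are first brought to a common size, Image.blend does the same job in one call:

```python
from PIL import Image

img1 = Image.open('1.jpg').convert('RGB')
img2 = Image.open('2.jpg').convert('RGB')
# Image.blend requires matching sizes and modes
img2 = img2.resize(img1.size)
# out = img1 * (1 - alpha) + img2 * alpha, so alpha=0.5 is an even mix
Image.blend(img1, img2, 0.5).save('new_blend.jpg')
```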
The second trick is image style transfer: use Python's deep-learning packages to teach the computer to imitate the style of a world-famous painting, then apply that style to another picture! There's no ready-made mini-program for this one, because it pulls in hundreds of dependencies and is a notch more demanding. First, install the required modules; pip handles it in one go:

```
pip3 install keras
pip3 install h5py
pip3 install tensorflow
```

TensorFlow can download slowly depending on your network; you can also build it from source if you prefer. (At the time of writing, the author notes, TensorFlow only supported Python 3.5, so install a 3.5 interpreter first.) Then download the VGG16 model, and put the script and the images you want to render in the same folder.
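A quick sanity check that the installs actually took (my addition, not from the original post; it just prints whatever versions pip resolved):

```
python3 -c "import keras, tensorflow; print(tensorflow.__version__, keras.__version__)"
```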
Here's the code (adapted from the Zhihu user 杨航锋's code):

```python
from __future__ import print_function
from keras.preprocessing.image import load_img, img_to_array
from scipy.misc import imsave  # needs an older SciPy; newer code would use imageio.imwrite
import numpy as np
import time
import argparse

from keras.applications import vgg16
from keras import backend as K
from scipy.optimize import fmin_l_bfgs_b

parser = argparse.ArgumentParser(description='Neural style transfer with Keras.')
parser.add_argument('base_image_path', metavar='base', type=str,
                    help='Path to the image to transform.')
parser.add_argument('style_reference_image_path', metavar='ref', type=str,
                    help='Path to the style reference image.')
parser.add_argument('result_prefix', metavar='res_prefix', type=str,
                    help='Prefix for the saved results.')
parser.add_argument('--iter', type=int, default=15, required=False,
                    help='Number of iterations to run.')
parser.add_argument('--content_weight', type=float, default=0.025, required=False,
                    help='Content weight.')
parser.add_argument('--style_weight', type=float, default=1.0, required=False,
                    help='Style weight.')
parser.add_argument('--tv_weight', type=float, default=1.0, required=False,
                    help='Total Variation weight.')

args = parser.parse_args()
base_image_path = args.base_image_path
style_reference_image_path = args.style_reference_image_path
result_prefix = args.result_prefix
iterations = args.iter

# weights of the different loss components
total_variation_weight = args.tv_weight
style_weight = args.style_weight
content_weight = args.content_weight

# dimensions of the generated picture
width, height = load_img(base_image_path).size
img_nrows = 400
img_ncols = int(width * img_nrows / height)

# util function to open, resize and format a picture into an appropriate tensor
def preprocess_image(image_path):
    img = load_img(image_path, target_size=(img_nrows, img_ncols))
    img = img_to_array(img)
    img = np.expand_dims(img, axis=0)
    img = vgg16.preprocess_input(img)
    return img

# util function to convert a tensor into a valid image
def deprocess_image(x):
    if K.image_data_format() == 'channels_first':
        x = x.reshape((3, img_nrows, img_ncols))
        x = x.transpose((1, 2, 0))
    else:
        x = x.reshape((img_nrows, img_ncols, 3))
    # remove zero-center by mean pixel
    x[:, :, 0] += 103.939
    x[:, :, 1] += 116.779
    x[:, :, 2] += 123.68
    # 'BGR' -> 'RGB'
    x = x[:, :, ::-1]
    x = np.clip(x, 0, 255).astype('uint8')
    return x

# get tensor representations of our images
base_image = K.variable(preprocess_image(base_image_path))
style_reference_image = K.variable(preprocess_image(style_reference_image_path))

# this will contain our generated image
if K.image_data_format() == 'channels_first':
    combination_image = K.placeholder((1, 3, img_nrows, img_ncols))
else:
    combination_image = K.placeholder((1, img_nrows, img_ncols, 3))

# combine the 3 images into a single Keras tensor
input_tensor = K.concatenate([base_image,
                              style_reference_image,
                              combination_image], axis=0)

# build the VGG16 network with our 3 images as input;
# the model will be loaded with pre-trained ImageNet weights
model = vgg16.VGG16(input_tensor=input_tensor,
                    weights='imagenet', include_top=False)
print('Model loaded.')

# get the symbolic outputs of each "key" layer (we gave them unique names)
outputs_dict = dict([(layer.name, layer.output) for layer in model.layers])

# compute the neural style loss;
# first we need to define 4 util functions

# the gram matrix of an image tensor (feature-wise outer product)
def gram_matrix(x):
    assert K.ndim(x) == 3
    if K.image_data_format() == 'channels_first':
        features = K.batch_flatten(x)
    else:
        features = K.batch_flatten(K.permute_dimensions(x, (2, 0, 1)))
    gram = K.dot(features, K.transpose(features))
    return gram

# the "style loss" keeps the style of the reference image in the generated
# image; it is based on the gram matrices (which capture style) of feature
# maps from the style reference image and from the generated image
def style_loss(style, combination):
    assert K.ndim(style) == 3
    assert K.ndim(combination) == 3
    S = gram_matrix(style)
    C = gram_matrix(combination)
    channels = 3
    size = img_nrows * img_ncols
    return K.sum(K.square(S - C)) / (4. * (channels ** 2) * (size ** 2))

# an auxiliary loss function designed to maintain
# the "content" of the base image in the generated image
def content_loss(base, combination):
    return K.sum(K.square(combination - base))

# the 3rd loss function, total variation loss,
# designed to keep the generated image locally coherent
def total_variation_loss(x):
    assert K.ndim(x) == 4
    if K.image_data_format() == 'channels_first':
        a = K.square(x[:, :, :img_nrows - 1, :img_ncols - 1] - x[:, :, 1:, :img_ncols - 1])
        b = K.square(x[:, :, :img_nrows - 1, :img_ncols - 1] - x[:, :, :img_nrows - 1, 1:])
    else:
        a = K.square(x[:, :img_nrows - 1, :img_ncols - 1, :] - x[:, 1:, :img_ncols - 1, :])
        b = K.square(x[:, :img_nrows - 1, :img_ncols - 1, :] - x[:, :img_nrows - 1, 1:, :])
    return K.sum(K.pow(a + b, 1.25))

# combine these loss functions into a single scalar
loss = K.variable(0.)
layer_features = outputs_dict['block4_conv2']
base_image_features = layer_features[0, :, :, :]
combination_features = layer_features[2, :, :, :]
loss += content_weight * content_loss(base_image_features,
                                      combination_features)

feature_layers = ['block1_conv1', 'block2_conv1',
                  'block3_conv1', 'block4_conv1',
                  'block5_conv1']
for layer_name in feature_layers:
    layer_features = outputs_dict[layer_name]
    style_reference_features = layer_features[1, :, :, :]
    combination_features = layer_features[2, :, :, :]
    sl = style_loss(style_reference_features, combination_features)
    loss += (style_weight / len(feature_layers)) * sl
loss += total_variation_weight * total_variation_loss(combination_image)

# get the gradients of the generated image wrt the loss
grads = K.gradients(loss, combination_image)

outputs = [loss]
if isinstance(grads, (list, tuple)):
    outputs += grads
else:
    outputs.append(grads)

f_outputs = K.function([combination_image], outputs)

def eval_loss_and_grads(x):
    if K.image_data_format() == 'channels_first':
        x = x.reshape((1, 3, img_nrows, img_ncols))
    else:
        x = x.reshape((1, img_nrows, img_ncols, 3))
    outs = f_outputs([x])
    loss_value = outs[0]
    if len(outs[1:]) == 1:
        grad_values = outs[1].flatten().astype('float64')
    else:
        grad_values = np.array(outs[1:]).flatten().astype('float64')
    return loss_value, grad_values

"""
This Evaluator class makes it possible to compute loss and gradients
in one pass while retrieving them via two separate functions, "loss"
and "grads". This is done because scipy.optimize requires separate
functions for loss and gradients, but computing them separately
would be inefficient.
"""
class Evaluator(object):
    def __init__(self):
        self.loss_value = None
        self.grads_values = None

    def loss(self, x):
        assert self.loss_value is None
        loss_value, grad_values = eval_loss_and_grads(x)
        self.loss_value = loss_value
        self.grad_values = grad_values
        return self.loss_value

    def grads(self, x):
        assert self.loss_value is not None
        grad_values = np.copy(self.grad_values)
        self.loss_value = None
        self.grad_values = None
        return grad_values

evaluator = Evaluator()

# run scipy-based optimization (L-BFGS) over the pixels of the
# generated image so as to minimize the neural style loss
if K.image_data_format() == 'channels_first':
    x = np.random.uniform(0, 255, (1, 3, img_nrows, img_ncols)) - 128.
else:
    x = np.random.uniform(0, 255, (1, img_nrows, img_ncols, 3)) - 128.

for i in range(iterations):
    print('Start of iteration', i)
    start_time = time.time()
    x, min_val, info = fmin_l_bfgs_b(evaluator.loss, x.flatten(),
                                     fprime=evaluator.grads, maxfun=20)
    print('Current loss value:', min_val)
    # save current generated image
    img = deprocess_image(x.copy())
    fname = result_prefix + '_at_iteration_%d.png' % i
    imsave(fname, img)
    end_time = time.time()
    print('Image saved as', fname)
    print('Iteration %d completed in %ds' % (i, end_time - start_time))
```

It renders the image progressively, iteration by iteration.
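Assuming you saved the script as neural_style_transfer.py (the script and image names are my placeholders), a typical run passes the content image, the style image, and a prefix for the output files:

```
python3 neural_style_transfer.py content.jpg style.jpg result --iter 15
```

Each iteration then writes result_at_iteration_0.png, result_at_iteration_1.png, and so on, which is exactly the progressive rendering just mentioned.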
I do happen to have a wife, and she's very good-looking. But to spare everyone's feelings, I'll demonstrate by rendering Monet's style onto 万门's 新起点嘉园 office building instead. Here's what it looks like.
Honestly, as long as a gift is made with care by your own hands, the person you like is bound to be moved.
May everyone longing for love find the one their heart belongs to on 520.
Reposted from: 万门. You're welcome to follow my blog or the Python学习交流 public account: https://home.cnblogs.com/u/Python1234/