The ExMobi mobile application platform adopts the Mobile Backend as a Service (MBaaS) model. It unifies the integration of multiple data sources, such as B/S adaptation, Web Service, DB, and API files, into a single Taglib tag library, so that complex business data can be wired up with simple tags and attribute settings. A flexible picking mode is also supported, making it easy to convert different data sources into JSON, XML, HTML, files, and other formats for different usage scenarios. The platform further separates data from the UI, which saves interaction traffic and improves code reuse and maintainability.
If you want to dig deeper, search for "ExMobi移动应用平台" (ExMobi mobile application platform) or "南京烽火星空移动应用平台" (Nanjing FiberHome StarrySky mobile application platform).
```python
import numpy as np
import pandas as pd

names = ("Balance,Duration,History,Purpose,Credit amount,Savings,Employment,"
         "instPercent,sexMarried,Guarantors,Residence duration,Assets,Age,"
         "concCredit,Apartment,Credits,Occupation,Dependents,hasPhone,Foreign,lable").split(',')
data = pd.read_csv("Desktop/sunshengyun/data/german/german.data", sep='\s+', names=names)
data.head()
```
```
  Balance  Duration History Purpose  Credit amount Savings Employment  instPercent sexMarried Guarantors ... Assets  Age concCredit Apartment  Credits Occupation  Dependents hasPhone Foreign  lable
0     A11         6     A34     A43           1169     A65        A75            4        A93       A101 ...   A121   67       A143      A152        2       A173           1     A192    A201      1
1     A12        48     A32     A43           5951     A61        A73            2        A92       A101 ...   A121   22       A143      A152        1       A173           1     A191    A201      2
2     A14        12     A34     A46           2096     A61        A74            2        A93       A101 ...   A121   49       A143      A152        1       A172           2     A191    A201      1
3     A11        42     A32     A42           7882     A61        A74            2        A93       A103 ...   A122   45       A143      A153        1       A173           2     A191    A201      1
4     A11        24     A33     A40           4870     A61        A73            3        A93       A101 ...   A124   53       A143      A153        2       A173           2     A191    A201      2

5 rows × 21 columns
```
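The file read above is the Statlog German credit data set. As a convenience, a minimal sketch of loading it directly from the UCI repository instead of a local copy; the URL is an assumption, and `names` is the same column list defined earlier:

```python
# Assumed UCI mirror of the Statlog German credit data (space-separated, no header)
url = "https://archive.ics.uci.edu/ml/machine-learning-databases/statlog/german/german.data"
data = pd.read_csv(url, sep='\s+', names=names)
```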
```python
data.Balance.unique()
```
```
array(['A11', 'A12', 'A14', 'A13'], dtype=object)
```
```python
data.count()
```
```
Balance               1000
Duration              1000
History               1000
Purpose               1000
Credit amount         1000
Savings               1000
Employment            1000
instPercent           1000
sexMarried            1000
Guarantors            1000
Residence duration    1000
Assets                1000
Age                   1000
concCredit            1000
Apartment             1000
Credits               1000
Occupation            1000
Dependents            1000
hasPhone              1000
Foreign               1000
lable                 1000
dtype: int64
```
```python
# Descriptive statistics for the numeric variables
data.describe()
```
```
          Duration  Credit amount  instPercent  Residence duration          Age      Credits   Dependents        lable
count  1000.000000    1000.000000  1000.000000         1000.000000  1000.000000  1000.000000  1000.000000  1000.000000
mean     20.903000    3271.258000     2.973000            2.845000    35.546000     1.407000     1.155000     1.300000
std      12.058814    2822.736876     1.118715            1.103718    11.375469     0.577654     0.362086     0.458487
min       4.000000     250.000000     1.000000            1.000000    19.000000     1.000000     1.000000     1.000000
25%      12.000000    1365.500000     2.000000            2.000000    27.000000     1.000000     1.000000     1.000000
50%      18.000000    2319.500000     3.000000            3.000000    33.000000     1.000000     1.000000     1.000000
75%      24.000000    3972.250000     4.000000            4.000000    42.000000     2.000000     1.000000     2.000000
max      72.000000   18424.000000     4.000000            4.000000    75.000000     4.000000     2.000000     2.000000
```
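describe() only summarizes the numeric columns by default. If a quick summary of the categorical (object-typed) columns is also wanted, a hedged sketch using the include parameter (available in reasonably recent pandas versions):

```python
# count / unique / top / freq for every object-typed column
data.describe(include=['object'])
```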
```python
data.Duration.unique()
```
```
array([ 6, 48, 12, 42, 24, 36, 30, 15,  9, 10,  7, 60, 18, 45, 11, 27,  8,
       54, 20, 14, 33, 21, 16,  4, 47, 13, 22, 39, 28,  5, 26, 72, 40], dtype=int64)
```
```python
data.History.unique()
```
```
array(['A34', 'A32', 'A33', 'A30', 'A31'], dtype=object)
```
```python
data.groupby('Balance').size().order(ascending=False)
```
```
c:\python27\lib\site-packages\ipykernel\__main__.py:1: FutureWarning: order is deprecated, use sort_values(...)
Balance
A14    394
A11    274
A12    269
A13     63
dtype: int64
```
```python
data.groupby('Purpose').size().order(ascending=False)
```
```
c:\python27\lib\site-packages\ipykernel\__main__.py:1: FutureWarning: order is deprecated, use sort_values(...)
Purpose
A43     280
A40     234
A42     181
A41     103
A49      97
A46      50
A45      22
A44      12
A410     12
A48       9
dtype: int64
```
```python
data.groupby('Apartment').size().order(ascending=False)
```
```
c:\python27\lib\site-packages\ipykernel\__main__.py:1: FutureWarning: order is deprecated, use sort_values(...)
Apartment
A152    713
A151    179
A153    108
dtype: int64
```
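The FutureWarning above notes that Series.order() is deprecated (and removed in later pandas releases); the same frequency tables can be produced with sort_values, a small sketch:

```python
# Non-deprecated equivalent of .order(ascending=False)
data.groupby('Balance').size().sort_values(ascending=False)
data.groupby('Purpose').size().sort_values(ascending=False)
data.groupby('Apartment').size().sort_values(ascending=False)
```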
```python
import matplotlib.pyplot as plt
%matplotlib inline

data.plot(x='lable', y='Age', kind='scatter', alpha=0.02, s=50);
```

![png](output_13_0.png)

```python
data.hist('Age', bins=15);
```
![png](output_14_0.png)

```python
target = data.lable
features_data = data.drop('lable', axis=1)
numeric_features = [c for c in features_data if features_data[c].dtype.kind in ('i', 'f')]  # keep the variables whose dtype is integer or float
numeric_features
```
```
['Duration', 'Credit amount', 'instPercent', 'Residence duration', 'Age', 'Credits', 'Dependents']
```
```python
numeric_data = features_data[numeric_features]
numeric_data.head()
```
```
   Duration  Credit amount  instPercent  Residence duration  Age  Credits  Dependents
0         6           1169            4                   4   67        2           1
1        48           5951            2                   2   22        1           1
2        12           2096            2                   3   49        1           2
3        42           7882            2                   4   45        1           2
4        24           4870            3                   4   53        2           2
```
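The dtype.kind test above can also be expressed with pandas' select_dtypes; a minimal equivalent sketch (the variable name is only illustrative):

```python
# Select all numeric columns in one call
numeric_data_alt = features_data.select_dtypes(include=[np.number])
```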
```python
categorical_data = features_data.drop(numeric_features, axis=1)
categorical_data.head()
```
```
  Balance History Purpose Savings Employment sexMarried Guarantors Assets concCredit Apartment Occupation hasPhone Foreign
0     A11     A34     A43     A65        A75        A93       A101   A121       A143      A152       A173     A192    A201
1     A12     A32     A43     A61        A73        A92       A101   A121       A143      A152       A173     A191    A201
2     A14     A34     A46     A61        A74        A93       A101   A121       A143      A152       A172     A191    A201
3     A11     A32     A42     A61        A74        A93       A103   A122       A143      A153       A173     A191    A201
4     A11     A33     A40     A61        A73        A93       A101   A124       A143      A153       A173     A191    A201
```
```python
categorical_data_encoded = categorical_data.apply(lambda x: pd.factorize(x)[0])  # pd.factorize converts a categorical variable into integer codes
# apply runs the conversion on every column
categorical_data_encoded.head(5)
```
```
   Balance  History  Purpose  Savings  Employment  sexMarried  Guarantors  Assets  concCredit  Apartment  Occupation  hasPhone  Foreign
0        0        0        0        0           0           0           0       0           0          0           0         0        0
1        1        1        0        1           1           1           0       0           0          0           0         1        0
2        2        0        1        1           2           0           0       0           0          0           1         1        0
3        0        1        2        1           2           0           1       1           0          1           0         1        0
4        0        2        3        1           1           0           0       2           0          1           0         1        0
```
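pd.factorize returns a pair (codes, uniques), which is why the lambda above keeps only element [0]; a small sketch on a single column (the categories it prints match the Balance values seen earlier):

```python
codes, uniques = pd.factorize(categorical_data['Balance'])
print(uniques)    # distinct categories in order of first appearance: A11, A12, A14, A13
print(codes[:5])  # integer code assigned to each of the first five rows
```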
```python
features = pd.concat([numeric_data, categorical_data_encoded], axis=1)  # merge the numeric and encoded categorical data
features.head()
# One-hot encoding could be used for the categorical variables instead, e.g.:
# features = pd.get_dummies(features_data)
# features.head()
```
```
   Duration  Credit amount  instPercent  Residence duration  Age  Credits  Dependents  Balance  History  Purpose  Savings  Employment  sexMarried  Guarantors  Assets  concCredit  Apartment  Occupation  hasPhone  Foreign
0         6           1169            4                   4   67        2           1        0        0        0        0           0           0           0       0           0          0           0         0        0
1        48           5951            2                   2   22        1           1        1        1        0        1           1           1           0       0           0          0           0         1        0
2        12           2096            2                   3   49        1           2        2        0        1        1           2           0           0       0           0          0           1         1        0
3        42           7882            2                   4   45        1           2        0        1        2        1           2           0           1       1           0          1           0         1        0
4        24           4870            3                   4   53        2           2        0        2        3        1           1           0           0       2           0          1           0         1        0
```
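As the comment above mentions, one-hot encoding is the usual alternative to the integer codes from pd.factorize; a hedged sketch of that variant (get_dummies expands each categorical column into one indicator column per level, so the feature matrix becomes wider):

```python
# One-hot alternative: an indicator column for every categorical level
features_onehot = pd.concat([numeric_data, pd.get_dummies(categorical_data)], axis=1)
features_onehot.head()
```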
```python
X = features.values.astype(np.float32)     # convert the feature matrix to float32
y = (target.values == 1).astype(np.int32)  # 1: good, 2: bad
```
```python
from sklearn.cross_validation import train_test_split  # train_test_split performs the split
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)  # test_size sets the share of samples held out for testing
```
```python
from sklearn.tree import DecisionTreeClassifier
from sklearn.cross_validation import cross_val_score

clf = DecisionTreeClassifier(max_depth=8)  # max_depth caps the depth of the tree
# Cross-validate the classifier; the scoring metric is the area under the ROC curve (AUC),
# and a larger AUC means a better classifier
scores = cross_val_score(clf, X_train, y_train, cv=3, scoring='roc_auc')
print("ROC AUC Decision Tree: {:.4f} +/-{:.4f}".format(
    np.mean(scores), np.std(scores)))
```
```
ROC AUC Decision Tree: 0.6866 +/-0.0105
```
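Cross-validation on the training split is the evaluation used here; as a complementary check, a hedged sketch of scoring the held-out test set with sklearn's roc_auc_score (the AUC is computed from the predicted probability of the positive class):

```python
from sklearn.metrics import roc_auc_score

clf.fit(X_train, y_train)                 # fit on the training split
proba = clf.predict_proba(X_test)[:, 1]   # probability of the positive ("good") class
print("Test-set ROC AUC: {:.4f}".format(roc_auc_score(y_test, proba)))
```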
```python
# Plot a learning curve (sample count on the x-axis, training and cross-validation scores on the y-axis)
# to compare decision trees of different depths and check for over- or under-fitting
from sklearn.learning_curve import learning_curve

def plot_learning_curve(estimator, X, y, ylim=(0, 1.1), cv=3,
                        n_jobs=1, train_sizes=np.linspace(.1, 1.0, 5),
                        scoring=None):
    plt.title("Learning curves for %s" % type(estimator).__name__)
    plt.ylim(ylim); plt.grid()
    plt.xlabel("Training examples")
    plt.ylabel("Score")
    train_sizes, train_scores, validation_scores = learning_curve(
        estimator, X, y, cv=cv, n_jobs=n_jobs, train_sizes=train_sizes,
        scoring=scoring)
    train_scores_mean = np.mean(train_scores, axis=1)
    validation_scores_mean = np.mean(validation_scores, axis=1)
    plt.plot(train_sizes, train_scores_mean, 'o-', color="r",
             label="Training score")
    plt.plot(train_sizes, validation_scores_mean, 'o-', color="g",
             label="Cross-validation score")
    plt.legend(loc="best")
    print("Best validation score: {:.4f}".format(validation_scores_mean[-1]))
```
```python
clf = DecisionTreeClassifier(max_depth=None)
plot_learning_curve(clf, X_train, y_train, scoring='roc_auc')
# Note the large gap between the training and cross-validation scores,
# which suggests the model is overfitting the training data
```
```
Best validation score: 0.6310
```
```python
clf = DecisionTreeClassifier(max_depth=10)
plot_learning_curve(clf, X_train, y_train, scoring='roc_auc')
```
```
Best validation score: 0.6565
```
```python
clf = DecisionTreeClassifier(max_depth=8)
plot_learning_curve(clf, X_train, y_train, scoring='roc_auc')
```
```
Best validation score: 0.6762
```
```python
clf = DecisionTreeClassifier(max_depth=5)
plot_learning_curve(clf, X_train, y_train, scoring='roc_auc')
```
```
Best validation score: 0.7219
```
```python
clf = DecisionTreeClassifier(max_depth=4)
plot_learning_curve(clf, X_train, y_train, scoring='roc_auc')
```
```
Best validation score: 0.7226
```
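The depth sweep above is done by hand; the same search can be automated with GridSearchCV. A hedged sketch using the same older sklearn API as the rest of the notebook (the candidate grid below is an illustrative assumption):

```python
from sklearn.grid_search import GridSearchCV  # sklearn.model_selection in newer releases

param_grid = {'max_depth': [3, 4, 5, 6, 8, 10, None]}  # assumed candidate depths
search = GridSearchCV(DecisionTreeClassifier(), param_grid, cv=3, scoring='roc_auc')
search.fit(X_train, y_train)
print(search.best_params_, search.best_score_)
```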
Does WELearn 视听说 (the listening-and-speaking module) keep a record when reopened? Yes. The exam client downloaded from the website to the local machine keeps records, and its background process checks whether you have left the exam screen (leaving three times triggers a warning) and scans for dictionary programs. The "WELearn随行课堂" one-click auto-study helper is a PC tool for accumulating course study time; it bundles a range of course-study utilities and claims to reach 100% completion automatically within a minute.
If WELearn crashes, reopen it (for example via the scan-to-open entry), run the repair, and you will be able to log in again.
1. Open the wegame client on the computer, find the repair.exe program, double-click it, and wait for the repair to finish. This tool rolls wegame back to the previous, more stable version.
2. First disconnect this computer from the network (unplug the cable or disable the connection), then, with no network, enter the correct username and password and try to log in; the system will report that there is no network connection, which is fine. Reconnect the network and try to log in again, and the crash-on-launch problem should be gone. It is usually caused by a fault in wegame's automatic saving of the account and password.
3. Press Windows key + R to open the Run dialog, type %appdata% and press Enter, then delete the Tencent folder under the Roaming directory. This only removes some of Tencent's cached files and does not affect normal use.
4. Antivirus and security software such as QQ电脑管家 (Tencent PC Manager) or 360安全卫士 (360 Safe Guard) can interfere with wegame; close such software and then restart wegame.
That covers everything about how an iOS program exchanges data with learncloud, including: connecting an iOS app to learncloud, predicting credit risk with Python + sklearn decision trees, and whether WELearn 视听说 keeps a record when reopened. If you want to learn more, follow us; your support keeps us updating!