Mining association rules with the Apriori algorithm in Matlab: a complete, fully commented program


The snippet below is the part of the Apriori program that grows the frequent 2-itemsets into frequent k-itemsets. I have two questions about it:

1. The loop bound k inside the while loop seems to stay fixed, i.e. it is always the number of frequent 2-itemsets. After the frequent 3-itemsets are obtained, shouldn't k change to their count? Where does the code reflect that?

2. The program has two nested for loops, but in practice the second for loop ends as soon as the first frequent 3-itemset is found, even though other frequent 3-itemsets should exist. Shouldn't a for loop run unconditionally up to its bound k? In my run k was 15, yet the program stopped with i=2, j=3, and j never went on to 4 and up to k. What is the cause? Any pointers from an expert would be appreciated; it's urgent...

while (k > 0)
    le = length(candidate{1})
    num = 2
    nl = 0
    for i = 1:k-1
        for j = i+1:k
            x1 = candidate{i}   % candidate starts as the frequent 2-itemsets; this is its i-th itemset
            x2 = candidate{j}
            c = intersect(x1, x2)
            M = 0
            r = 1
            nn = 0
            l1 = 0
            if (length(c) == le-1) & (sum(c == x1(1:le-1)) == le-1)
                houxuan = union(x1(1:le), x2(le))
                % Tree pruning: if any (K-1)-subset of a candidate is infrequent, the candidate is pruned
                sub_set = subset(houxuan)   % generate all (K-1)-subsets of this candidate
                NN = length(sub_set)
                % Check whether all of these (K-1)-subsets are frequent
                while (r & M < NN)
                    M = M + 1
                    r = in(sub_set{M}, candidate)
                end
                if M == NN
                    nl = nl + 1
                    % Candidate k-itemset
                    cand{nl} = houxuan
                    % Count how many transactions contain this candidate k-itemset
                    le = length(cand{1})
                    for i = 1:m
                        s = cand{nl}
                        x = X(i,:)
                        if sum(x(s)) == le
                            nn = nn + 1
                        end
                    end
                end
            end
            % Keep the candidates that reach the support threshold
            if nn >= th
                ll = ll + 1
                candmid{nl} = cand{nl}
                pfxj(nl).element = cand{nl}
                pfxj(nl).time = nn
                disp('The frequent itemset obtained is:')
                result = (candmid{nl})
                disp(result)
            end
        end
    end
end
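For reference on question 1: the snippet as posted never refreshes candidate or k at the end of the while body, so every pass would re-join the same frequent 2-itemsets. In the usual Apriori structure, the end of each pass replaces the seed set with the frequent k-itemsets just found and resets the loop bound to their count. A minimal sketch of that missing update, assuming the surrounding program uses the same variable names as above (this is a reconstruction, not the original author's code):

    % Hypothetical end-of-pass update, placed just before the final "end" of the while loop
    if nl > 0
        candidate = candmid        % frequent k-itemsets found in this pass seed the next pass
        k = length(candidate)      % the loop bound must change to their count
    else
        k = 0                      % no frequent k-itemsets were found: terminate the while loop
    end

(On question 2, note that the snippet initializes l1 = 0 but then increments ll, which is never initialized; in Matlab that raises an undefined-variable error the first time nn >= th holds, which would stop the run right after the first frequent 3-itemset is reported.)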

Local environment:

Ubuntu 12

Hadoop 1.1.2

First, make sure Hadoop itself is configured and working.
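A quick sanity check (an addition here, assuming the usual pseudo-distributed setup) is jps, which should list all five Hadoop 1.x daemons:

root@zcf-K42JZ:/usr/local/hadoop# jps

Expected entries: NameNode, DataNode, SecondaryNameNode, JobTracker, and TaskTracker (plus Jps itself).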

1. The WordCount.java source file can be found in the Hadoop distribution at src/examples/org/apache/hadoop/examples/WordCount.java.

Create a new wordcount folder and copy WordCount.java into dev/wordcount.

2. Compile WordCount.java.
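The original post does not show the compile command. A typical invocation against Hadoop 1.1.2 (the core jar name and the wordcount_classes output directory are assumptions here) is:

root@zcf-K42JZ:/usr/local/hadoop# mkdir wordcount/wordcount_classes
root@zcf-K42JZ:/usr/local/hadoop# javac -classpath hadoop-core-1.1.2.jar -d wordcount/wordcount_classes wordcount/WordCount.java

If javac complains about missing classes, add the jars under lib/ (for example lib/commons-cli-1.2.jar) to the -classpath as well.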

3. Package the generated class files into a jar.
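Again, the exact command is not shown; assuming the layout above, packaging the compiled classes with their package path preserved would look like:

root@zcf-K42JZ:/usr/local/hadoop# jar -cvf wordcount/wordcount.jar -C wordcount/wordcount_classes .

This yields the wordcount/wordcount.jar that step 6 runs.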

4. Create two files, file01 and file02, under wordcount.
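The post does not show the file contents, but the classic tutorial pair is consistent with both the file sizes uploaded below (22 and 28 bytes) and the counts printed in step 7:

file01: Hello World Bye World
file02: Hello Hadoop Goodbye Hadoop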

5. Start Hadoop, create an input folder on HDFS, and upload the two input files into it.


root@zcf-K42JZ:/usr/local/hadoop# bin/hadoop dfs -ls

ls: Cannot access .: No such file or directory.

root@zcf-K42JZ:/usr/local/hadoop# bin/hadoop dfs -mkdir input

root@zcf-K42JZ:/usr/local/hadoop# bin/hadoop dfs -ls

Found 1 items

drwxr-xr-x - root supergroup 0 2014-03-04 17:48 /user/root/input

root@zcf-K42JZ:/usr/local/hadoop# bin/hadoop fs -put /home/zcf/桌面/file01 input

root@zcf-K42JZ:/usr/local/hadoop# bin/hadoop fs -put /home/zcf/桌面/file02 input

root@zcf-K42JZ:/usr/local/hadoop# bin/hadoop fs -ls input

Found 2 items

-rw-r--r-- 1 root supergroup 22 2014-03-04 17:50 /user/root/input/file01

-rw-r--r-- 1 root supergroup 28 2014-03-04 17:50 /user/root/input/file02

6. Run wordcount.jar.


root@zcf-K42JZ:/usr/local/hadoop# bin/hadoop jar wordcount/wordcount.jar org.apache.hadoop.examples.WordCount input output

14/03/04 17:58:14 INFO input.FileInputFormat: Total input paths to process : 2

14/03/04 17:58:14 INFO util.NativeCodeLoader: Loaded the native-hadoop library

14/03/04 17:58:14 WARN snappy.LoadSnappy: Snappy native library not loaded

14/03/04 17:58:15 INFO mapred.JobClient: Running job: job_201403041744_0001

14/03/04 17:58:16 INFO mapred.JobClient: map 0% reduce 0%

14/03/04 17:58:21 INFO mapred.JobClient: map 50% reduce 0%

14/03/04 17:58:22 INFO mapred.JobClient: map 100% reduce 0%

14/03/04 17:58:29 INFO mapred.JobClient: map 100% reduce 33%

14/03/04 17:58:31 INFO mapred.JobClient: map 100% reduce 100%

14/03/04 17:58:32 INFO mapred.JobClient: Job complete: job_201403041744_0001

14/03/04 17:58:32 INFO mapred.JobClient: Counters: 29

14/03/04 17:58:32 INFO mapred.JobClient: Job Counters

14/03/04 17:58:32 INFO mapred.JobClient: Launched reduce tasks=1

14/03/04 17:58:32 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=8421

14/03/04 17:58:32 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0

14/03/04 17:58:32 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0

14/03/04 17:58:32 INFO mapred.JobClient: Launched map tasks=2

14/03/04 17:58:32 INFO mapred.JobClient: Data-local map tasks=2

14/03/04 17:58:32 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=9155

14/03/04 17:58:32 INFO mapred.JobClient: File Output Format Counters

14/03/04 17:58:32 INFO mapred.JobClient: Bytes Written=41

14/03/04 17:58:32 INFO mapred.JobClient: FileSystemCounters

14/03/04 17:58:32 INFO mapred.JobClient: FILE_BYTES_READ=79

14/03/04 17:58:32 INFO mapred.JobClient: HDFS_BYTES_READ=268

14/03/04 17:58:32 INFO mapred.JobClient: FILE_BYTES_WRITTEN=152857

14/03/04 17:58:32 INFO mapred.JobClient: HDFS_BYTES_WRITTEN=41

14/03/04 17:58:32 INFO mapred.JobClient: File Input Format Counters

14/03/04 17:58:32 INFO mapred.JobClient: Bytes Read=50

14/03/04 17:58:32 INFO mapred.JobClient: Map-Reduce Framework

14/03/04 17:58:32 INFO mapred.JobClient: Map output materialized bytes=85

14/03/04 17:58:32 INFO mapred.JobClient: Map input records=2

14/03/04 17:58:32 INFO mapred.JobClient: Reduce shuffle bytes=85

14/03/04 17:58:32 INFO mapred.JobClient: Spilled Records=12

14/03/04 17:58:32 INFO mapred.JobClient: Map output bytes=82

14/03/04 17:58:32 INFO mapred.JobClient: CPU time spent (ms)=2840

14/03/04 17:58:32 INFO mapred.JobClient: Total committed heap usage (bytes)=306511872

14/03/04 17:58:32 INFO mapred.JobClient: Combine input records=8

14/03/04 17:58:32 INFO mapred.JobClient: SPLIT_RAW_BYTES=218

14/03/04 17:58:32 INFO mapred.JobClient: Reduce input records=6

14/03/04 17:58:32 INFO mapred.JobClient: Reduce input groups=5

14/03/04 17:58:32 INFO mapred.JobClient: Combine output records=6

14/03/04 17:58:32 INFO mapred.JobClient: Physical memory (bytes) snapshot=382898176

14/03/04 17:58:32 INFO mapred.JobClient: Reduce output records=5

14/03/04 17:58:32 INFO mapred.JobClient: Virtual memory (bytes) snapshot=1164251136

14/03/04 17:58:32 INFO mapred.JobClient: Map output records=8

7. Check the results.


root@zcf-K42JZ:/usr/local/hadoop# bin/hadoop fs -ls

Found 2 items

drwxr-xr-x - root supergroup 0 2014-03-04 17:50 /user/root/input

drwxr-xr-x - root supergroup 0 2014-03-04 17:58 /user/root/output

root@zcf-K42JZ:/usr/local/hadoop# bin/hadoop fs -ls output

Found 3 items

-rw-r--r-- 1 root supergroup 0 2014-03-04 17:58 /user/root/output/_SUCCESS

drwxr-xr-x - root supergroup 0 2014-03-04 17:58 /user/root/output/_logs

-rw-r--r-- 1 root supergroup 41 2014-03-04 17:58 /user/root/output/part-r-00000

root@zcf-K42JZ:/usr/local/hadoop# bin/hadoop fs -cat /output/part-r-00000

cat: File does not exist: /output/part-r-00000

root@zcf-K42JZ:/usr/local/hadoop# bin/hadoop fs -cat output/part-r-00000

Bye 1

Goodbye 1

Hadoop 2

Hello 2

World 2

This completes the WordCount example on Hadoop. If you want to run it again, you must first delete the output folder on HDFS: to guarantee correct results, Hadoop throws an exception whenever the output directory already exists, as follows:


ERROR security.UserGroupInformation: PriviledgedActionException as:

root cause:org.apache.hadoop.mapred.FileAlreadyExistsException:

Output directory output already exists

Delete the output folder on HDFS:


root@zcf-K42JZ:/usr/local/hadoop# bin/hadoop fs -rmr output

Deleted hdfs://localhost:9000/user/root/output

