Microservice distributed tracing with Pinpoint


Dependencies

    JDK 1.8
    Hadoop 2.5.1
    HBase 1.2.6
    Pinpoint 2.3.3
    Windows 7
Installing Hadoop

Download the Hadoop package

Download the 2.5.1 release from http://archive.apache.org/dist/hadoop/core/hadoop-2.5.1/.

Unpack the Hadoop package and set environment variables

Unpack the downloaded Hadoop archive to a directory and set the environment variable:

HADOOP_HOME=D:\hadoop-2.5.1

Then add "%HADOOP_HOME%\bin" to the system Path variable.
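The same can be done from an Administrator command prompt with setx (a sketch; adjust the path to your own install location, and note that setx /M stores the already-expanded value of Path back to the registry):

```bat
setx /M HADOOP_HOME "D:\hadoop-2.5.1"
setx /M Path "%Path%;%HADOOP_HOME%\bin"
```

Only command prompts opened afterwards pick up the new values.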

Download winutils for Hadoop

The download address is https://codeload.github.com/gvreddy1210/bin/zip/master. Note that the winutils build must be compatible with your Hadoop version. After downloading, unpack it over the bin directory under the path above, e.g. D:\hadoop-2.5.1\bin.

Create the DataNode and NameNode directories

Create data directories for the NameNode and DataNode, e.g. D:\hadoop-2.5.1\data\namenode and D:\hadoop-2.5.1\data\datanode.
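From a command prompt, matching the paths used in hdfs-site.xml below:

```bat
md D:\hadoop-2.5.1\data\namenode
md D:\hadoop-2.5.1\data\datanode
```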

Edit the Hadoop configuration files

Four configuration files need changes: core-site.xml, hdfs-site.xml, mapred-site.xml, and yarn-site.xml, all located under D:\hadoop-2.5.1\etc\hadoop:

The complete core-site.xml:

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>

The complete hdfs-site.xml:

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/D:/hadoop-2.5.1/data/namenode</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/D:/hadoop-2.5.1/data/datanode</value>
    </property>
</configuration>

The complete mapred-site.xml:

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>

The complete yarn-site.xml:

<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <name>yarn.scheduler.minimum-allocation-mb</name>
        <value>1024</value>
    </property>
    <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>4096</value>
    </property>
    <property>
        <name>yarn.nodemanager.resource.cpu-vcores</name>
        <value>2</value>
    </property>
</configuration>

Note: change the paths above to match your own installation.

Initialize the NameNode

Change into the hadoop\bin directory and run: hadoop namenode -format

Start Hadoop

With the initialization above done, Hadoop can be started: change into the hadoop\sbin directory and run start-all (stop it with stop-all).

Once started, open http://localhost:50070 to verify that Hadoop is up.

Installing HBase

Download the HBase package

Download 1.2.6 from the Apache archive: http://archive.apache.org/dist/hbase/1.2.6/

Unpack and edit the configuration

Edit the conf/hbase-env.cmd and conf/hbase-site.xml configuration files.

In hbase-env.cmd, point JAVA_HOME at your JDK, e.g. set JAVA_HOME=D:\java\jdk1.8.0_162. The complete file:

@rem Set environment variables here.

@rem The java implementation to use.  Java 1.7+ required.
set JAVA_HOME=D:\java\jdk1.8.0_162

@rem Extra Java CLASSPATH elements.  Optional.
@rem set HBASE_CLASSPATH=

@rem The maximum amount of heap to use. Default is left to JVM default.
@rem set HBASE_HEAPSIZE=1000

@rem Uncomment below if you intend to use off heap cache. For example, to allocate 8G of
@rem offheap, set the value to "8G".
@rem set HBASE_OFFHEAPSIZE=1000

@rem For example, to allocate 8G of offheap, to 8G:
@rem set HBASE_OFFHEAPSIZE=8G

@rem Extra Java runtime options.
@rem Below are what we set by default.  May only work with SUN JVM.
@rem For more on why as well as other possible settings,
@rem see http://wiki.apache.org/hadoop/PerformanceTuning
@rem JDK6 on Windows has a known bug for IPv6, use preferIPv4Stack unless JDK7.
@rem See TestIPv6NIOServerSocketChannel.
set HBASE_OPTS="-XX:+UseConcMarkSweepGC" "-Djava.net.preferIPv4Stack=true"

@rem Configure PermSize. Only needed in JDK7. You can safely remove it for JDK8+
set HBASE_MASTER_OPTS=%HBASE_MASTER_OPTS% "-XX:PermSize=128m" "-XX:MaxPermSize=128m"
set HBASE_REGIONSERVER_OPTS=%HBASE_REGIONSERVER_OPTS% "-XX:PermSize=128m" "-XX:MaxPermSize=128m"

@rem Uncomment below to enable java garbage collection logging for the server-side processes
@rem this enables basic gc logging for the server processes to the .out file
@rem set SERVER_GC_OPTS="-verbose:gc" "-XX:+PrintGCDetails" "-XX:+PrintGCDateStamps" %HBASE_GC_OPTS%

@rem this enables gc logging using automatic GC log rolling. Only applies to jdk 1.6.0_34+ and 1.7.0_2+. Either use this set of options or the one above
@rem set SERVER_GC_OPTS="-verbose:gc" "-XX:+PrintGCDetails" "-XX:+PrintGCDateStamps" "-XX:+UseGCLogFileRotation" "-XX:NumberOfGCLogFiles=1" "-XX:GCLogFileSize=512M" %HBASE_GC_OPTS%

@rem Uncomment below to enable java garbage collection logging for the client processes in the .out file.
@rem set CLIENT_GC_OPTS="-verbose:gc" "-XX:+PrintGCDetails" "-XX:+PrintGCDateStamps" %HBASE_GC_OPTS%

@rem Uncomment below (along with above GC logging) to put GC information in its own logfile (will set HBASE_GC_OPTS)
@rem set HBASE_USE_GC_LOGFILE=true

@rem Uncomment and adjust to enable JMX exporting
@rem See jmxremote.password and jmxremote.access in $JRE_HOME/lib/management to configure remote password access.
@rem More details at: http://java.sun.com/javase/6/docs/technotes/guides/management/agent.html
@rem
@rem set HBASE_JMX_BASE="-Dcom.sun.management.jmxremote.ssl=false" "-Dcom.sun.management.jmxremote.authenticate=false"
@rem set HBASE_MASTER_OPTS=%HBASE_JMX_BASE% "-Dcom.sun.management.jmxremote.port=10101"
@rem set HBASE_REGIONSERVER_OPTS=%HBASE_JMX_BASE% "-Dcom.sun.management.jmxremote.port=10102"
@rem set HBASE_THRIFT_OPTS=%HBASE_JMX_BASE% "-Dcom.sun.management.jmxremote.port=10103"
@rem set HBASE_ZOOKEEPER_OPTS=%HBASE_JMX_BASE% "-Dcom.sun.management.jmxremote.port=10104"

@rem File naming hosts on which HRegionServers will run.  %HBASE_HOME%\conf\regionservers by default.
@rem set HBASE_REGIONSERVERS=%HBASE_HOME%\conf\regionservers

@rem Where log files are stored.  %HBASE_HOME%\logs by default.
@rem set HBASE_LOG_DIR=%HBASE_HOME%\logs

@rem A string representing this instance of hbase. $USER by default.
@rem set HBASE_IDENT_STRING=%USERNAME%

@rem Seconds to sleep between slave commands.  Unset by default.  This
@rem can be useful in large clusters, where, e.g., slave rsyncs can
@rem otherwise arrive faster than the master can service them.
@rem set HBASE_SLAVE_SLEEP=0.1

@rem Tell HBase whether it should manage its own instance of ZooKeeper or not.
@rem set HBASE_MANAGES_ZK=true

The complete hbase-site.xml:

<configuration>
    <property>
        <name>hbase.master</name>
        <value>localhost</value>
    </property>
    <property>
        <name>hbase.rootdir</name>
        <value>hdfs://localhost:9000/hbase</value>
    </property>
    <property>
        <name>hbase.zookeeper.quorum</name>
        <value>localhost</value>
    </property>
    <property>
        <name>hbase.master.info.port</name>
        <value>60000</value>
    </property>
    <property>
        <name>hbase.cluster.distributed</name>
        <value>false</value>
    </property>
</configuration>

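A common pitfall here is hbase.rootdir pointing at a different HDFS address than fs.defaultFS in core-site.xml, in which case HBase cannot reach its root directory. A quick POSIX-shell sanity check over the two values (a sketch with the values hard-coded; substitute your own):

```shell
# Values copied from core-site.xml (fs.defaultFS) and hbase-site.xml (hbase.rootdir).
fs_defaultfs="hdfs://localhost:9000"
hbase_rootdir="hdfs://localhost:9000/hbase"

# hbase.rootdir must be a path under the fs.defaultFS authority.
case "$hbase_rootdir" in
  "$fs_defaultfs"/*) echo "hbase.rootdir matches fs.defaultFS" ;;
  *) echo "MISMATCH: fix core-site.xml or hbase-site.xml" ;;
esac
```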
Start HBase

Change into the hbase\bin directory and run: start-hbase

Once started, the master status is available at http://localhost:60000/master-status

Installing Pinpoint

Initialize the HBase schema

Download the initialization script from https://github.com/pinpoint-apm/pinpoint/tree/2.3.x/hbase/scripts

Change into the bin directory of the HBase installation above and run:

hbase shell d:/hbase-create.hbase

Download and start the services

From https://github.com/pinpoint-apm/pinpoint/releases, download pinpoint-collector, pinpoint-web, and pinpoint-agent.

Start pinpoint-collector:

java -jar -Dpinpoint.zookeeper.address=127.0.0.1 pinpoint-collector-boot-2.3.3.jar

Start pinpoint-web:

java -jar -Dpinpoint.zookeeper.address=127.0.0.1 pinpoint-web-boot-2.3.3.jar

Once it is up, open http://127.0.0.1:8080/ to see the Pinpoint UI.

Start the agent

Unpack the pinpoint-agent archive and add the agent parameters to your service's startup command, for example:

java -jar -javaagent:pinpoint-agent-2.3.3/pinpoint-bootstrap.jar -Dpinpoint.agentId=test-agent -Dpinpoint.applicationName=TESTAPP pinpoint-quickstart-testapp-2.3.3.jar

Printing the Pinpoint txId from the application

Add PtxId and PspanId to the pattern in the application's logback configuration file.


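As a sketch of what the change looks like (the appender name and the rest of the pattern are placeholders; the Pinpoint agent publishes the transaction id and span id through the MDC keys PtxId and PspanId):

```xml
<appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
        <!-- TxId/SpanId are read from the MDC via %X{PtxId} and %X{PspanId} -->
        <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level [TxId:%X{PtxId} SpanId:%X{PspanId}] %logger{36} - %msg%n</pattern>
    </encoder>
</appender>
```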
Agent configuration changes

Edit the pinpoint.config file under D:\pinpoint-agent-2.3.3\profiles\release as follows:

profiler.sampling.rate=1
# When using logback, set this option to true; for other logging frameworks,
# enable the corresponding profiler.xxx.logging.transactioninfo option instead.
profiler.logback.logging.transactioninfo=true

After the change, restart the service and the corresponding txId will be printed in the logs.

 

Original article (in Chinese): http://outofmemory.cn/zaji/5708157.html. Please credit the source, 内存溢出, when reposting.
