Spark 2.0 Source Code Reading: Dissecting the Spark RPC Framework


Table of Contents
    • Differences from Older Versions
    • RPC Framework Architecture
    • Detailed Component Descriptions
      • TransportContext
        • TransportConf
        • TransportClientFactory
          • TransportClientBootstrap
          • Creating the Client: TransportClient
          • TransportClient
        • TransportServer
          • TransportChannelHandler
          • ManagedBuffer
          • TransportServerBootstrap
        • RpcHandler

Differences from Older Versions

In Spark 0.x and 1.x, message communication between components relied mainly on Akka, but Akka was removed in Spark 2.0.0.

In Spark 1.x, uploading user files and JARs used an HttpFileServer implemented with Jetty; this too was dropped in Spark 2.0.0, replaced by NettyStreamManager, which is built on Spark's internal RPC framework.

RPC Framework Architecture

TransportContext contains the transport context's configuration, TransportConf, and the RpcHandler that processes client request messages.

TransportConf is required when creating both TransportClientFactory and TransportServer.

TransportClientFactory is the factory class for RPC clients.

TransportServer is the server-side implementation of the RPC framework.


Detailed Component Descriptions

TransportContext: the transport context, holding the context information used to create the transport server (TransportServer) and the transport client factory (TransportClientFactory), and supporting the setup of the pipeline of Netty's SocketChannel via TransportChannelHandler.
TransportConf: the configuration of the transport context.
RpcHandler: the handler for messages sent via the transport client's (TransportClient) sendRPC method.
MessageEncoder: encodes message content before it is put on the pipe, guarding against packet loss and parse errors when the other end reads it.
MessageDecoder: parses the ByteBuf read from the pipe, guarding against packet loss and parse errors.
TransportFrameDecoder: parses the ByteBuf read from the pipe into data frames.
RpcResponseCallback: the callback interface invoked after RpcHandler finishes processing a request message.
TransportClientFactory: the factory class that creates transport clients (TransportClient).
ClientPool: a pool of transport clients (TransportClient) maintained between two peer nodes. ClientPool is an internal component of TransportClientFactory.
TransportClient: the client side of the RPC framework, used to fetch consecutive chunks from a pre-negotiated stream. TransportClient is intended to allow efficient transfer of large amounts of data, broken up into chunks ranging from a few hundred KB to a few MB. The actual setup of the streams a TransportClient fetches chunks from is done outside the transport layer; the sendRPC method enables this control-plane communication between client and server at the same level.
TransportClientBootstrap: a bootstrap program executed once on the client when the server accepts the client connection.
TransportRequestHandler: the handler that processes client requests and responds after writing out chunk data.
TransportResponseHandler: the handler that processes server responses and replies to the client that issued the request.
TransportChannelHandler: delegates requests to TransportRequestHandler and responses to TransportResponseHandler, adding transport-level handling.
TransportServerBootstrap: a bootstrap program executed once on the server when a client connects to it.
TransportServer: the server side of the RPC framework, providing efficient, low-level streaming services.
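TransportFrameDecoder's job is to split the raw byte stream back into frames. The idea of length-prefixed framing can be sketched with a small self-contained codec (an illustration of the concept only; Spark's actual TransportFrameDecoder operates on Netty ByteBufs and is considerably more elaborate):

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of length-prefixed framing. Each frame is a 4-byte
// big-endian length header followed by the payload bytes.
class FrameCodec {

  // Prepend a 4-byte length header to the payload.
  static byte[] encode(byte[] payload) {
    ByteBuffer buf = ByteBuffer.allocate(4 + payload.length);
    buf.putInt(payload.length);
    buf.put(payload);
    return buf.array();
  }

  // Split a concatenated byte stream back into the original payloads.
  static List<byte[]> decode(byte[] stream) {
    List<byte[]> frames = new ArrayList<>();
    ByteBuffer buf = ByteBuffer.wrap(stream);
    while (buf.remaining() >= 4) {
      int len = buf.getInt();
      byte[] frame = new byte[len];
      buf.get(frame);
      frames.add(frame);
    }
    return frames;
  }
}
```

Framing is what lets MessageDecoder assume it always sees one complete message at a time, regardless of how TCP fragmented the bytes on the wire.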

TransportContext

First, the UML diagram (not reproduced here).

From it you can see that the important member variables TransportConf and rpcHandler are both present, and MessageEncoder and MessageDecoder can also be found.

The most important methods are createServer and createClientFactory, which create the server side and the client factory respectively.

public class TransportContext {
  private static final Logger logger = LoggerFactory.getLogger(TransportContext.class);

  private final TransportConf conf;
  private final RpcHandler rpcHandler;
  private final boolean closeIdleConnections;
  private static final MessageEncoder ENCODER = MessageEncoder.INSTANCE;
  private static final MessageDecoder DECODER = MessageDecoder.INSTANCE;
TransportConf
public class TransportConf {

  private final ConfigProvider conf;

  private final String module;

The actual configuration is supplied by a ConfigProvider, which is an abstract class:

public abstract class ConfigProvider {

  public abstract String get(String name);

  public abstract Iterable<Map.Entry<String, String>> getAll();

  public String get(String name, String defaultValue) {
    try {
      return get(name);
    } catch (NoSuchElementException e) {
      return defaultValue;
    }
  }
}

Spark usually creates a TransportConf through SparkTransportConf:

object SparkTransportConf {
  private val MAX_DEFAULT_NETTY_THREADS = 8
  def fromSparkConf(_conf: SparkConf, module: String, numUsableCores: Int = 0): TransportConf = {
    val conf = _conf.clone
    val numThreads = defaultNumThreads(numUsableCores)
    conf.setIfMissing(s"spark.$module.io.serverThreads", numThreads.toString)
    conf.setIfMissing(s"spark.$module.io.clientThreads", numThreads.toString)

    new TransportConf(module, new ConfigProvider {
      override def get(name: String): String = conf.get(name)
      override def get(name: String, defaultValue: String): String = conf.get(name, defaultValue)
      override def getAll(): java.lang.Iterable[java.util.Map.Entry[String, String]] = {
        conf.getAll.toMap.asJava.entrySet()
      }
    })
  }
  private def defaultNumThreads(numUsableCores: Int): Int = {
      // If numUsableCores <= 0, use the number of available processors. Not all of the
      // machine's cores should go to network transport, so the number of cores allocated
      // to it is capped at 8.
    val availableCores =
      if (numUsableCores > 0) numUsableCores else Runtime.getRuntime.availableProcessors()
    math.min(availableCores, MAX_DEFAULT_NETTY_THREADS)
  }
}

You can see it simply calls the SparkConf clone method discussed earlier to copy the Spark properties, and implements ConfigProvider's abstract methods.
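The ConfigProvider contract above (an abstract get that fails on a missing key, plus a concrete overload that falls back to a default) can be mirrored with a map-backed stand-in. SimpleConfigProvider here is a hypothetical illustration, not a class from Spark:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.NoSuchElementException;

// Map-backed stand-in mirroring the ConfigProvider contract shown above
// (hypothetical illustration; not Spark's own class).
class SimpleConfigProvider {
  private final Map<String, String> values = new HashMap<>();

  SimpleConfigProvider(Map<String, String> init) {
    values.putAll(init);
  }

  // Like the abstract get(name): fail loudly when the key is missing.
  String get(String name) {
    String v = values.get(name);
    if (v == null) {
      throw new NoSuchElementException(name);
    }
    return v;
  }

  // Like the concrete get(name, defaultValue): fall back to the default.
  String get(String name, String defaultValue) {
    try {
      return get(name);
    } catch (NoSuchElementException e) {
      return defaultValue;
    }
  }
}
```

This is the same shape SparkTransportConf builds anonymously on top of a cloned SparkConf.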

TransportClientFactory
public TransportClientFactory createClientFactory(List<TransportClientBootstrap> bootstraps) {
  return new TransportClientFactory(this, bootstraps);
}

public TransportClientFactory createClientFactory() {
  return createClientFactory(new ArrayList<>());
}

These are the TransportContext methods that create a TransportClientFactory.

private final TransportContext context;
private final TransportConf conf;
private final List<TransportClientBootstrap> clientBootstraps;
// Each SocketAddress maps to a ClientPool containing multiple TransportClients
private final ConcurrentHashMap<SocketAddress, ClientPool> connectionPool;

/** Random number generator for picking connections between peers. */
private final Random rand;
private final int numConnectionsPerPeer;

private final Class<? extends Channel> socketChannelClass;
private EventLoopGroup workerGroup;
private PooledByteBufAllocator pooledAllocator;
private final NettyMemoryMetrics metrics;

public TransportClientFactory(
    TransportContext context,
    List<TransportClientBootstrap> clientBootstraps) {
  this.context = Preconditions.checkNotNull(context);
  this.conf = context.getConf();
  this.clientBootstraps = Lists.newArrayList(Preconditions.checkNotNull(clientBootstraps));
  this.connectionPool = new ConcurrentHashMap<>();
  this.numConnectionsPerPeer = conf.numConnectionsPerPeer();
  this.rand = new Random();

  IOMode ioMode = IOMode.valueOf(conf.ioMode());
  this.socketChannelClass = NettyUtils.getClientChannelClass(ioMode);
  this.workerGroup = NettyUtils.createEventLoop(
      ioMode,
      conf.clientThreads(),
      conf.getModuleName() + "-client");
  this.pooledAllocator = NettyUtils.createPooledByteBufAllocator(
    conf.preferDirectBufs(), false, conf.clientThreads());
  this.metrics = new NettyMemoryMetrics(
    this.pooledAllocator, conf.getModuleName() + "-client", conf);
}

The TransportClientFactory constructor.

What the fields mean:

clientBootstraps: the list of TransportClientBootstraps passed in as a parameter;

connectionPool: a cache of per-socket-address connection pools (ClientPool); its data structure is fairly involved;

numConnectionsPerPeer: the value of the "spark.<module>.io.numConnectionsPerPeer" property obtained from TransportConf, specifying the number of connections between peer nodes. The module name here is TransportConf's module field; many Spark components are built on the RPC framework and are distinguished by module name. For the RPC module, for example, the key is "spark.rpc.io.numConnectionsPerPeer";
rand: used to randomly pick one of the TransportClients cached in a socket address's ClientPool, load-balancing across the connections;
ioMode: the IO mode, i.e. the "spark.<module>.io.mode" property from TransportConf. The default is NIO; Spark also supports EPOLL;
socketChannelClass: the class used when the client Channel is created, selected by ioMode; defaults to NioSocketChannel, with EpollSocketChannel also supported;
workerGroup: following Netty's conventions, a client has only a worker group, so only workerGroup is created here; its concrete type is NioEventLoopGroup by default;
pooledAllocator: a pooled ByteBuf allocator with thread-local caching disabled.
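The per-module key pattern described above can be sketched as a tiny helper (ConfKeys is an illustrative name; the real lookup logic lives inside TransportConf):

```java
// Sketch of how TransportConf-style keys are assembled from the module name
// (illustrative helper; the real logic lives inside TransportConf).
class ConfKeys {
  static String key(String module, String suffix) {
    return "spark." + module + ".io." + suffix;
  }
}
```

With module "rpc", the key becomes "spark.rpc.io.numConnectionsPerPeer"; with module "shuffle", "spark.shuffle.io.numConnectionsPerPeer", which is how different components keep independent settings on the same framework.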

public static EventLoopGroup createEventLoop(IOMode mode, int numThreads, String threadPrefix) {
  ThreadFactory threadFactory = createThreadFactory(threadPrefix);

  switch (mode) {
    case NIO:
      return new NioEventLoopGroup(numThreads, threadFactory);
    case EPOLL:
      return new EpollEventLoopGroup(numThreads, threadFactory);
    default:
      throw new IllegalArgumentException("Unknown io mode: " + mode);
  }
}

Netty's socket channel implementations, from lowest to highest performance:

  • OioSocketChannel: traditional, blocking I/O.
  • NioSocketChannel: select/poll or epoll; since JDK 7, epoll is chosen automatically on Linux.
  • EpollSocketChannel: epoll, Linux only, exposing additional options.
  • EpollDomainSocketChannel: IPC mode, only when client and server are on the same host; supported since Netty 4.0.26.
TransportClientBootstrap
public interface TransportClientBootstrap {
  void doBootstrap(TransportClient client, Channel channel) throws RuntimeException;
}

TransportClientBootstrap is a client-side bootstrap executed on the TransportClient, mainly performing initialization when the connection is established (e.g. authentication, encryption). The operations a TransportClientBootstrap performs are often expensive, but fortunately the established connection can be reused.

It has three implementations; let's take EncryptionDisablerBootstrap as an example and see what it actually does:

private static class EncryptionDisablerBootstrap implements TransportClientBootstrap {

  @Override
  public void doBootstrap(TransportClient client, Channel channel) {
    channel.pipeline().remove(SaslEncryption.ENCRYPTION_HANDLER_NAME);
  }

}

Its job is to remove SASL encryption from the client's pipeline.

Creating the Client: TransportClient
public class TransportClientFactory implements Closeable {

  /** A simple data structure to track the pool of clients between two peer nodes. */
  private static class ClientPool {
    TransportClient[] clients;
    Object[] locks;

    ClientPool(int size) {
      clients = new TransportClient[size];
      locks = new Object[size];
      for (int i = 0; i < size; i++) {
        locks[i] = new Object();
      }
    }
  }

A ClientPool is essentially an array of TransportClients, with the Objects in the locks array corresponding one-to-one, by array index, to the TransportClients in the clients array. Using a separate lock per TransportClient reduces lock contention between threads under concurrency, which in turn reduces blocking and improves throughput.
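The per-slot locking idea can be sketched with a simplified pool, where strings stand in for TransportClients. This illustrates only the lock-striping pattern, not Spark's actual code:

```java
import java.util.Random;

// Simplified sketch of ClientPool's lock striping: one lock object per slot,
// so threads that pick different slots never contend with each other.
class StripedPool {
  final String[] clients;
  final Object[] locks;
  final Random rand = new Random();

  StripedPool(int size) {
    clients = new String[size];
    locks = new Object[size];
    for (int i = 0; i < size; i++) {
      locks[i] = new Object();
    }
  }

  // Pick a random slot; create the entry under that slot's lock only.
  String getOrCreate() {
    int i = rand.nextInt(clients.length);
    synchronized (locks[i]) {
      if (clients[i] == null) {
        clients[i] = "client-" + i;
      }
      return clients[i];
    }
  }
}
```

A single pool-wide lock would serialize every connection attempt to a peer; striping lets up to numConnectionsPerPeer creations proceed in parallel.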

  public TransportClient createClient(String remoteHost, int remotePort)
      throws IOException, InterruptedException {
    final InetSocketAddress unresolvedAddress =
      InetSocketAddress.createUnresolved(remoteHost, remotePort);

    // Create the ClientPool if we don't have it yet.
    // Check whether a client pool already exists for the given address
    ClientPool clientPool = connectionPool.get(unresolvedAddress);
    if (clientPool == null) {
      // numConnectionsPerPeer comes from TransportConf: "spark.<module>.io.numConnectionsPerPeer"
      connectionPool.putIfAbsent(unresolvedAddress, new ClientPool(numConnectionsPerPeer));
      clientPool = connectionPool.get(unresolvedAddress);
    }

    int clientIndex = rand.nextInt(numConnectionsPerPeer);
    TransportClient cachedClient = clientPool.clients[clientIndex];

    // Check whether the client is initialized and active; if so, update the
    // handler's last-access time and return the TransportClient
    if (cachedClient != null && cachedClient.isActive()) {
      TransportChannelHandler handler = cachedClient.getChannel().pipeline()
        .get(TransportChannelHandler.class);
      synchronized (handler) {
        handler.getResponseHandler().updateTimeOfLastRequest();
      }

      if (cachedClient.isActive()) {
        logger.trace("Returning cached connection to {}: {}",
          cachedClient.getSocketAddress(), cachedClient);
        return cachedClient;
      }
    }

    final long preResolveHost = System.nanoTime();
    final InetSocketAddress resolvedAddress = new InetSocketAddress(remoteHost, remotePort);
    final long hostResolveTimeMs = (System.nanoTime() - preResolveHost) / 1000000;
    if (hostResolveTimeMs > 2000) {
      logger.warn("DNS resolution for {} took {} ms", resolvedAddress, hostResolveTimeMs);
    } else {
      logger.trace("DNS resolution for {} took {} ms", resolvedAddress, hostResolveTimeMs);
    }

    // Acquire the per-slot lock
    synchronized (clientPool.locks[clientIndex]) {
      cachedClient = clientPool.clients[clientIndex];
      // Double-checked locking
      if (cachedClient != null) {
        if (cachedClient.isActive()) {
          logger.trace("Returning cached connection to {}: {}", resolvedAddress, cachedClient);
          return cachedClient;
        } else {
          logger.info("Found inactive connection to {}, creating a new one.", resolvedAddress);
        }
      }
      // Finally, call createClient(resolvedAddress) to create the TransportClient
      clientPool.clients[clientIndex] = createClient(resolvedAddress);
      return clientPool.clients[clientIndex];
    }
  }

createClient(String remoteHost, int remotePort) mainly looks up the ClientPool for the unresolvedAddress in connectionPool, creating one if absent, and then randomly picks one of its clients. If that client is non-null and active, it updates the handler's last-access time and returns it; otherwise it calls createClient(InetSocketAddress address) to create a new TransportClient.

private TransportClient createClient(InetSocketAddress address)
    throws IOException, InterruptedException {
  logger.debug("Creating new connection to {}", address);

  Bootstrap bootstrap = new Bootstrap();
  bootstrap.group(workerGroup)
    .channel(socketChannelClass)
    .option(ChannelOption.TCP_NODELAY, true)
    .option(ChannelOption.SO_KEEPALIVE, true)
    .option(ChannelOption.CONNECT_TIMEOUT_MILLIS, conf.connectionTimeoutMs())
    .option(ChannelOption.ALLOCATOR, pooledAllocator);

  if (conf.receiveBuf() > 0) {
    bootstrap.option(ChannelOption.SO_RCVBUF, conf.receiveBuf());
  }

  if (conf.sendBuf() > 0) {
    bootstrap.option(ChannelOption.SO_SNDBUF, conf.sendBuf());
  }

  final AtomicReference<TransportClient> clientRef = new AtomicReference<>();
  final AtomicReference<Channel> channelRef = new AtomicReference<>();

  bootstrap.handler(new ChannelInitializer<SocketChannel>() {
    @Override
    public void initChannel(SocketChannel ch) {
      // This is where the client is actually created
      TransportChannelHandler clientHandler = context.initializePipeline(ch);
      clientRef.set(clientHandler.getClient());
      channelRef.set(ch);
    }
  });

  // Connect to the remote server
  long preConnect = System.nanoTime();
  ChannelFuture cf = bootstrap.connect(address);
  if (!cf.await(conf.connectionTimeoutMs())) {
    throw new IOException(
      String.format("Connecting to %s timed out (%s ms)", address, conf.connectionTimeoutMs()));
  } else if (cf.cause() != null) {
    throw new IOException(String.format("Failed to connect to %s", address), cf.cause());
  }
    
  TransportClient client = clientRef.get();
  Channel channel = channelRef.get();
  assert client != null : "Channel future completed successfully with null client";

  long preBootstrap = System.nanoTime();
  logger.debug("Connection to {} successful, running bootstraps...", address);
  try {
    // Run the client bootstraps on the new channel
    for (TransportClientBootstrap clientBootstrap : clientBootstraps) {
      clientBootstrap.doBootstrap(client, channel);
    }
  } catch (Exception e) { // catch non-RuntimeExceptions too as bootstrap may be written in Scala
    long bootstrapTimeMs = (System.nanoTime() - preBootstrap) / 1000000;
    logger.error("Exception while bootstrapping client after " + bootstrapTimeMs + " ms", e);
    client.close();
    throw Throwables.propagate(e);
  }
  return client;
}

createClient(InetSocketAddress address) essentially uses the Netty API to establish a socket connection to the server; clientRef stores the TransportClient instance and channelRef stores the SocketChannel instance.

public TransportChannelHandler initializePipeline(
    SocketChannel channel,
    RpcHandler channelRpcHandler) {
  try {
    TransportChannelHandler channelHandler = createChannelHandler(channel, channelRpcHandler);
    channel.pipeline()
      .addLast("encoder", ENCODER)
      .addLast(TransportFrameDecoder.HANDLER_NAME, NettyUtils.createFrameDecoder())
      .addLast("decoder", DECODER)
      // Netty's IdleStateHandler heartbeat mechanism detects whether the remote end is
      // alive; idle socket connections are dealt with to avoid wasting resources
      .addLast("idleStateHandler", new IdleStateHandler(0, 0, conf.connectionTimeoutMs() / 1000))
      .addLast("handler", channelHandler);
    return channelHandler;
  } catch (RuntimeException e) {
    logger.error("Error while initializing Netty pipeline", e);
    throw e;
  }
}

TransportChannelHandler clientHandler = context.initializePipeline(ch); is where the low-level Netty work happens: the ENCODER, DECODER, and related handlers are all installed here, while TransportChannelHandler carries the more framework-specific logic.

private TransportChannelHandler createChannelHandler(Channel channel, RpcHandler rpcHandler) {
  TransportResponseHandler responseHandler = new TransportResponseHandler(channel);
  TransportClient client = new TransportClient(channel, responseHandler);
  TransportRequestHandler requestHandler = new TransportRequestHandler(channel, client,
    rpcHandler, conf.maxChunksBeingTransferred());
    // client: the client; requestHandler: the request handler; responseHandler: the response handler
  return new TransportChannelHandler(client, responseHandler, requestHandler,
    conf.connectionTimeoutMs(), closeIdleConnections);
}

The above shows how TransportChannelHandler is created; the responsibilities of the request handler (requestHandler) and the response handler (responseHandler) will be analyzed later.

Note: a TransportClient holds only a TransportResponseHandler when created.

TransportClient
public TransportClient(Channel channel, TransportResponseHandler handler) {
  this.channel = Preconditions.checkNotNull(channel);
  this.handler = Preconditions.checkNotNull(handler);
  this.timedOut = false;
}

TransportClient has five methods for sending requests:

fetchChunk: requests a single chunk from a pre-negotiated stream on the remote end;
stream: fetches stream data from the remote end by stream ID;
sendRpc: sends an RPC request to the server, with at-least-once delivery semantics so the request is not lost;
sendRpcSync: sends a synchronous RPC request to the server and waits for the response up to the given timeout;
send: sends an RPC message to the server without expecting a reply, and therefore without any delivery guarantee;

TransportServer
public TransportServer createServer(
    String host, int port, List<TransportServerBootstrap> bootstraps) {
  return new TransportServer(this, host, port, rpcHandler, bootstraps);
}

/** Creates a new server, binding to any available ephemeral port. */
public TransportServer createServer(List<TransportServerBootstrap> bootstraps) {
  return createServer(0, bootstraps);
}

public TransportServer createServer() {
  return createServer(0, new ArrayList<>());
}

All of these ultimately call TransportServer createServer(String host, int port, List<TransportServerBootstrap> bootstraps).

public TransportServer(
    TransportContext context,
    String hostToBind,
    int portToBind,
    RpcHandler appRpcHandler,
    List<TransportServerBootstrap> bootstraps) {
  this.context = context;
  this.conf = context.getConf();
  this.appRpcHandler = appRpcHandler;
  this.bootstraps = Lists.newArrayList(Preconditions.checkNotNull(bootstraps));

  boolean shouldClose = true;
  try {
    init(hostToBind, portToBind);
    shouldClose = false;
  } finally {
    if (shouldClose) {
      JavaUtils.closeQuietly(this);
    }
  }
}

The fields in TransportServer's constructor are:

context: the TransportContext reference passed in as a parameter;
conf: the TransportConf, obtained via TransportContext's getConf;
appRpcHandler: the RPC request handler, RpcHandler;
bootstraps: the list of TransportServerBootstraps passed in as a parameter;

private void init(String hostToBind, int portToBind) {

  IOMode ioMode = IOMode.valueOf(conf.ioMode());
  EventLoopGroup bossGroup = NettyUtils.createEventLoop(ioMode, 1,
    conf.getModuleName() + "-boss");
  EventLoopGroup workerGroup =  NettyUtils.createEventLoop(ioMode, conf.serverThreads(),
    conf.getModuleName() + "-server");

  PooledByteBufAllocator allocator = NettyUtils.createPooledByteBufAllocator(
    conf.preferDirectBufs(), true /* allowCache */, conf.serverThreads());

  bootstrap = new ServerBootstrap()
    .group(bossGroup, workerGroup)
    .channel(NettyUtils.getServerChannelClass(ioMode))
    .option(ChannelOption.ALLOCATOR, allocator)
    .option(ChannelOption.SO_REUSEADDR, !SystemUtils.IS_OS_WINDOWS)
    .childOption(ChannelOption.ALLOCATOR, allocator);

  this.metrics = new NettyMemoryMetrics(
    allocator, conf.getModuleName() + "-server", conf);

  if (conf.backLog() > 0) {
    bootstrap.option(ChannelOption.SO_BACKLOG, conf.backLog());
  }

  if (conf.receiveBuf() > 0) {
    bootstrap.childOption(ChannelOption.SO_RCVBUF, conf.receiveBuf());
  }

  if (conf.sendBuf() > 0) {
    bootstrap.childOption(ChannelOption.SO_SNDBUF, conf.sendBuf());
  }

  bootstrap.childHandler(new ChannelInitializer<SocketChannel>() {
    @Override
    protected void initChannel(SocketChannel ch) {
      RpcHandler rpcHandler = appRpcHandler;
      for (TransportServerBootstrap bootstrap : bootstraps) {
        rpcHandler = bootstrap.doBootstrap(ch, rpcHandler);
      }
      context.initializePipeline(ch, rpcHandler);
    }
  });

  InetSocketAddress address = hostToBind == null ?
      new InetSocketAddress(portToBind): new InetSocketAddress(hostToBind, portToBind);
  channelFuture = bootstrap.bind(address);
  channelFuture.syncUninterruptibly();

  port = ((InetSocketAddress) channelFuture.channel().localAddress()).getPort();
  logger.debug("Shuffle server started on port: {}", port);
}

1. Create bossGroup and workerGroup (per the Netty API docs, a Netty server needs both);
2. Create a pooled ByteBuf allocator with thread-local caching disabled;

3. Use the Netty API to create and configure the server's root bootstrap;

4. Set the pipeline initialization callback on the root bootstrap; the callback first applies the TransportServerBootstraps, then calls TransportContext's initializePipeline method to initialize the Channel's pipeline;

5. Bind the root bootstrap to the listening socket port, and finally return the bound port.

The context.initializePipeline method was covered above; its core job is installing the ENCODER, DECODER, and related handlers, plus the channel handler from TransportChannelHandler channelHandler = createChannelHandler(channel, channelRpcHandler);

TransportChannelHandler
public class TransportChannelHandler extends ChannelInboundHandlerAdapter {

It extends ChannelInboundHandlerAdapter; the key point is TransportChannelHandler's channelRead implementation:

@Override
public void channelRead(ChannelHandlerContext ctx, Object request) throws Exception {
  if (request instanceof RequestMessage) {
    requestHandler.handle((RequestMessage) request);
  } else if (request instanceof ResponseMessage) {
    responseHandler.handle((ResponseMessage) request);
  } else {
    ctx.fireChannelRead(request);
  }
}

When the request TransportChannelHandler reads is a RequestMessage, its handling is delegated to TransportRequestHandler; when it is a ResponseMessage, handling is delegated to TransportResponseHandler.

public class TransportResponseHandler extends MessageHandler<ResponseMessage> {
public class TransportRequestHandler extends MessageHandler<RequestMessage> {

Both TransportRequestHandler and TransportResponseHandler extend the abstract class MessageHandler.

public abstract class MessageHandler<T extends Message> {
  /** Handles the receipt of a single message. */
  public abstract void handle(T message) throws Exception;

  /** Invoked when the channel this MessageHandler is on is active. */
  public abstract void channelActive();

  /** Invoked when an exception was caught on the Channel. */
  public abstract void exceptionCaught(Throwable cause);

  /** Invoked when the channel this MessageHandler is on is inactive. */
  public abstract void channelInactive();
}

MessageHandler defines several methods:

  • handle: handles the receipt of a single message;
  • channelActive: invoked when the channel becomes active;
  • exceptionCaught: invoked when an exception is caught on the channel;
  • channelInactive: invoked when the channel becomes inactive;
public interface Message extends Encodable {
  /** Used to identify this request type. */
  Type type();

  /** An optional body for the message. */
  ManagedBuffer body();

  /** Whether to include the body of the message in the same frame as the message. */
  boolean isBodyInFrame();
}

The definition of Message.

public interface Encodable {
  int encodedLength();

  void encode(ByteBuf buf);
}

Classes that implement the Encodable interface can be written into a ByteBuf; multiple objects may be stored in a single pre-allocated ByteBuf, so encodedLength here returns the number of bytes the object will occupy once encoded.
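The contract can be illustrated with a tiny message type that reports its encoded size and writes itself into a buffer. TextMessage is a hypothetical example, and java.nio.ByteBuffer stands in for Netty's ByteBuf to keep it self-contained:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Illustration of the Encodable contract: encodedLength() reports the byte
// size, encode() writes exactly that many bytes into the buffer.
class TextMessage {
  private final byte[] body;

  TextMessage(String text) {
    this.body = text.getBytes(StandardCharsets.UTF_8);
  }

  // Number of bytes this object occupies once encoded: 4-byte length + body.
  int encodedLength() {
    return 4 + body.length;
  }

  void encode(ByteBuffer buf) {
    buf.putInt(body.length);
    buf.put(body);
  }
}
```

Because every Encodable knows its size up front, the encoder can allocate one buffer for a whole batch of objects and let each one append itself.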

As the diagram shows, the concrete message classes all implement, directly or indirectly, the RequestMessage or ResponseMessage interface. RequestMessage has four concrete implementations:

ChunkFetchRequest: a request to fetch a single chunk in a stream's sequence.
RpcRequest: handled by the remote RPC server; an RPC request type for which the server must reply to the client.
OneWayMessage: also handled by the remote RPC server, but unlike RpcRequest it requires no reply.
StreamRequest: a request to a remote service to fetch streaming data.
Since OneWayMessage requires no response, ResponseMessage has three implementations each for the success and failure cases:

ChunkFetchSuccess: the message returned after handling a ChunkFetchRequest successfully;
ChunkFetchFailure: the message returned when handling a ChunkFetchRequest fails;
RpcResponse: the message returned after handling an RpcRequest successfully;
RpcFailure: the message returned when handling an RpcRequest fails;
StreamResponse: the message returned after handling a StreamRequest successfully;
StreamFailure: the message returned when handling a StreamRequest fails;

ManagedBuffer
public abstract class ManagedBuffer {

What its members represent:

size: returns the number of bytes of data.
nioByteBuffer: returns the data as an NIO ByteBuffer.
createInputStream: returns the data as an InputStream.
retain: increments the reference count when a new consumer uses this view.
release: decrements the reference count when a consumer stops using this view; the buffer is freed when the count reaches 0.
convertToNetty: converts the buffer's data into a Netty object, used to write the data out. The returned type is either io.netty.buffer.ByteBuf or io.netty.channel.FileRegion.

Message's body method returns a ManagedBuffer; ManagedBuffer is an abstract class.

It has a number of concrete implementations.

TransportServerBootstrap
public interface TransportServerBootstrap {
  RpcHandler doBootstrap(Channel channel, RpcHandler rpcHandler);
}

TransportServerBootstrap's doBootstrap method wraps (proxies) the server-side RpcHandler, receiving client requests. TransportServerBootstrap has two implementations: SaslServerBootstrap and EncryptionCheckerBootstrap.

public RpcHandler doBootstrap(Channel channel, RpcHandler rpcHandler) {
  return new SaslRpcHandler(conf, channel, rpcHandler, secretKeyHolder);
}

The doBootstrap implementation of SaslServerBootstrap.

RpcHandler
public TransportChannelHandler initializePipeline(
    SocketChannel channel,
    RpcHandler channelRpcHandler) {
  try {
    TransportChannelHandler channelHandler = createChannelHandler(channel, channelRpcHandler);
    channel.pipeline()
      .addLast("encoder", ENCODER)
      .addLast(TransportFrameDecoder.HANDLER_NAME, NettyUtils.createFrameDecoder())
      .addLast("decoder", DECODER)
      .addLast("idleStateHandler", new IdleStateHandler(0, 0, conf.connectionTimeoutMs() / 1000))
      // NOTE: Chunks are currently guaranteed to be returned in the order of request, but this
      // would require more logic to guarantee if this were not part of the same event loop.
      .addLast("handler", channelHandler);
    return channelHandler;
  } catch (RuntimeException e) {
    logger.error("Error while initializing Netty pipeline", e);
    throw e;
  }
}

Both client and server creation go through TransportContext.initializePipeline, which must be given an RpcHandler rpcHandler;

private TransportChannelHandler createChannelHandler(Channel channel, RpcHandler rpcHandler) {
  TransportResponseHandler responseHandler = new TransportResponseHandler(channel);
  TransportClient client = new TransportClient(channel, responseHandler);
  TransportRequestHandler requestHandler = new TransportRequestHandler(channel, client,
    rpcHandler, conf.maxChunksBeingTransferred());
  return new TransportChannelHandler(client, responseHandler, requestHandler,
    conf.connectionTimeoutMs(), closeIdleConnections);
}

Notice that requestHandler is created with the rpcHandler; in effect, requestHandler proxies the rpcHandler, so RpcHandler deserves a closer look. The client only uses the responseHandler, whose creation does not involve the rpcHandler, so the rpcHandler really only matters on the server side.

public abstract class RpcHandler {
  private static final RpcResponseCallback ONE_WAY_CALLBACK = new OneWayRpcCallback();
  public abstract void receive(
      TransportClient client,
      ByteBuffer message,
      RpcResponseCallback callback);
  public StreamCallbackWithID receiveStream(
      TransportClient client,
      ByteBuffer messageHeader,
      RpcResponseCallback callback) {
    throw new UnsupportedOperationException();
  }
  public abstract StreamManager getStreamManager();
  public void receive(TransportClient client, ByteBuffer message) {
    receive(client, message, ONE_WAY_CALLBACK);
  }
  public void channelActive(TransportClient client) { }
  public void channelInactive(TransportClient client) { }
  public void exceptionCaught(Throwable cause, TransportClient client) { }
  private static class OneWayRpcCallback implements RpcResponseCallback {

    private static final Logger logger = LoggerFactory.getLogger(OneWayRpcCallback.class);

    @Override
    public void onSuccess(ByteBuffer response) {
      logger.warn("Response provided for one-way RPC.");
    }

    @Override
    public void onFailure(Throwable e) {
      logger.error("Error response provided for one-way RPC.", e);
    }
  }
}

RpcHandler is an abstract class; its methods do the following:

receive: an abstract method that receives a single RPC message; the concrete handling logic is implemented by subclasses.

channelActive: invoked when the channel associated with the given client becomes active.
channelInactive: invoked when the channel associated with the given client becomes inactive.
exceptionCaught: invoked when an exception occurs on the channel.
getStreamManager: returns the StreamManager, which can fetch individual chunks from a stream and therefore also holds the state of the streams currently being fetched by TransportClients.
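A server typically subclasses RpcHandler and replies through the callback. The pattern can be sketched with simplified, self-contained interfaces (hypothetical names; not Spark's actual RpcHandler or RpcResponseCallback types):

```java
import java.nio.ByteBuffer;

// Self-contained sketch of the RpcHandler pattern: a handler receives a
// message and must invoke exactly one of onSuccess/onFailure on the callback.
class EchoDemo {

  interface Callback {
    void onSuccess(ByteBuffer response);
    void onFailure(Throwable e);
  }

  // An "echo" handler: reply with the request body unchanged.
  static void receive(ByteBuffer message, Callback callback) {
    try {
      callback.onSuccess(message.duplicate());
    } catch (Exception e) {
      callback.onFailure(e);
    }
  }
}
```

In Spark, TransportRequestHandler supplies a callback whose onSuccess/onFailure wrap the result in an RpcResponse or RpcFailure and write it back to the channel, as shown in processRpcRequest below.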

Finally, let's look at requestHandler's handle method; as emphasized earlier, requestHandler is used on the server side.

public void handle(RequestMessage request) {
  if (request instanceof ChunkFetchRequest) {
    processFetchRequest((ChunkFetchRequest) request);
  } else if (request instanceof RpcRequest) {
    processRpcRequest((RpcRequest) request);
  } else if (request instanceof OneWayMessage) {
    processOneWayMessage((OneWayMessage) request);
  } else if (request instanceof StreamRequest) {
    processStreamRequest((StreamRequest) request);
  } else if (request instanceof UploadStream) {
    processStreamUpload((UploadStream) request);
  } else {
    throw new IllegalArgumentException("Unknown request type: " + request);
  }
}

As you can see, each request type is handled differently.

private void processFetchRequest(final ChunkFetchRequest req) {
  ManagedBuffer buf;
  try {
    streamManager.checkAuthorization(reverseClient, req.streamChunkId.streamId);
    buf = streamManager.getChunk(req.streamChunkId.streamId, req.streamChunkId.chunkIndex);
  } catch (Exception e) {
    logger.error(String.format("Error opening block %s for request from %s",
      req.streamChunkId, getRemoteAddress(channel)), e);
    respond(new ChunkFetchFailure(req.streamChunkId, Throwables.getStackTraceAsString(e)));
    return;
  }

  streamManager.chunkBeingSent(req.streamChunkId.streamId);
  respond(new ChunkFetchSuccess(req.streamChunkId, buf)).addListener(future -> {
    streamManager.chunkSent(req.streamChunkId.streamId);
  });
}

Handling a chunk fetch request:

1. Call StreamManager's checkAuthorization method to verify that the client is authorized to read from the given stream;
2. Call StreamManager's getChunk method to fetch the single chunk (wrapped in a ManagedBuffer). Because a single stream is associated with a single TCP connection, getChunk must not be called in parallel for any particular stream;
3. Wrap the ManagedBuffer and the stream's chunk ID in a ChunkFetchSuccess and call respond to return it to the client.

private void processRpcRequest(final RpcRequest req) {
  try {
    rpcHandler.receive(reverseClient, req.body().nioByteBuffer(), new RpcResponseCallback() {
      @Override
      public void onSuccess(ByteBuffer response) {
        respond(new RpcResponse(req.requestId, new NioManagedBuffer(response)));
      }

      @Override
      public void onFailure(Throwable e) {
        respond(new RpcFailure(req.requestId, Throwables.getStackTraceAsString(e)));
      }
    });
  } catch (Exception e) {
    logger.error("Error while invoking RpcHandler#receive() on RPC id " + req.requestId, e);
    respond(new RpcFailure(req.requestId, Throwables.getStackTraceAsString(e)));
  } finally {
    req.body().release();
  }
}

To handle an RPC request, the body of the RpcRequest message, the client that sent it, and an anonymous RpcResponseCallback are passed to RpcHandler's receive method. In other words, it is RpcHandler, not TransportRequestHandler, that actually processes the RpcRequest. Since RpcHandler is abstract and its receive method is abstract too, the concrete work is done by subclasses that implement receive. Every RpcHandler subclass must, in its receive implementation, call back RpcResponseCallback's onSuccess (on success) or onFailure (on failure). As the RpcResponseCallback implementation shows, whether processing succeeds or fails, respond is called to answer the client.

private void processStreamRequest(final StreamRequest req) {
  ManagedBuffer buf;
  try {
    buf = streamManager.openStream(req.streamId);
  } catch (Exception e) {
    logger.error(String.format(
      "Error opening stream %s for request from %s", req.streamId, getRemoteAddress(channel)), e);
    respond(new StreamFailure(req.streamId, Throwables.getStackTraceAsString(e)));
    return;
  }

  if (buf != null) {
    streamManager.streamBeingSent(req.streamId);
    respond(new StreamResponse(req.streamId, buf.size(), buf)).addListener(future -> {
      streamManager.streamSent(req.streamId);
    });
  } else {
    respond(new StreamFailure(req.streamId, String.format(
      "Stream '%s' was not found.", req.streamId)));
  }
}

Handling a stream request:

  1. Call StreamManager's openStream method to wrap the fetched stream data in a ManagedBuffer;

  2. On success or failure, call respond to answer the client.

private void processOneWayMessage(OneWayMessage req) {
  try {
    rpcHandler.receive(reverseClient, req.body().nioByteBuffer());
  } catch (Exception e) {
    logger.error("Error while invoking RpcHandler#receive() for one-way message.", e);
  } finally {
    req.body().release();
  }
}


Handling an RPC request that needs no reply: processOneWayMessage is very similar to processRpcRequest; the difference is that processOneWayMessage goes through the receive overload backed by ONE_WAY_CALLBACK, so it does not respond to the client after processing the request.

private ChannelFuture respond(Encodable result) {
  SocketAddress remoteAddress = channel.remoteAddress();
  return channel.writeAndFlush(result).addListener(future -> {
    if (future.isSuccess()) {
      logger.trace("Sent result {} to client {}", result, remoteAddress);
    } else {
      logger.error(String.format("Error sending result %s to %s; closing connection",
        result, remoteAddress), future.cause());
      channel.close();
    }
  });
}

Finally, Netty's Channel.writeAndFlush method is called to respond to the client.
