Were pipelines removed from akka i/o?

While learning how to use Akka I/O, I am trying to implement a simple protocol on top of it, following the documentation here.

However, in my Gradle file I use version 2.3.9, as shown below:

dependencies {
    compile group: 'org.slf4j', name: 'slf4j-log4j12', version: '1.7.7'
    compile group: 'com.typesafe.akka', name: 'akka-actor_2.11', version: '2.3.9'
    compile group: 'com.typesafe.akka', name: 'akka-contrib_2.11', version: '2.3.9'
    compile group: 'org.scala-lang', name: 'scala-library', version: '2.11.5'
    testCompile group: 'junit', name: 'junit', version: '4.11'
}

Imports of pipeline-specific classes such as

import akka.io.SymmetricPipelineStage;
import akka.io.PipelineContext;
import akka.io.SymmetricPipePair;

generate "cannot resolve symbol" errors.

Hence my questions:

  1. Were these classes removed, or is there some dependency I need to add to my Gradle file?
  2. If they were removed, how should the encode/decode stage be handled?

Pipelines were experimental and indeed removed in Akka 2.3. The removal was documented in the Migration Guide 2.2.x to 2.3.x.

It is also mentioned here that the "older" pipeline implementation can be packaged with Akka 2.3, though it does not appear to be a simple matter of adding a dependency.

I would wager that Akka Streams is intended to be the better replacement for pipelines, coming in Akka 2.4 but available now as an experimental module. The encode/decode stage or protocol layer can be handled by using Akka Streams in conjunction with Akka I/O.
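For illustration (this is not from the original answer), here is a minimal sketch of a length-framed TCP echo server built with Akka Streams' TCP support, assuming an akka-stream version that provides ActorMaterializer and Framing.simpleFramingProtocol; the address, port, and the trivial echo Flow are placeholders for real protocol logic:

import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{Flow, Framing, Tcp}
import akka.util.ByteString

object StreamsFramingServer extends App {

  implicit val system       = ActorSystem("streams-demo")
  implicit val materializer = ActorMaterializer()

  // Bidirectional framing stage: prepends a length field to outgoing messages
  // and re-assembles complete frames from arbitrary TCP chunks on the way in.
  val framing = Framing.simpleFramingProtocol(maximumMessageLength = 1024)

  // Placeholder protocol logic: echo each decoded frame back unchanged.
  val logic = Flow[ByteString].map(identity)

  Tcp().bind("127.0.0.1", 6000).runForeach { connection =>
    connection.handleWith(framing.join(logic))
  }
}

The framing stage plays roughly the role that the old pipeline encode/decode stages did, while the inner Flow holds the protocol logic.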

Yes, pipelines were removed without any alternative. I came from the Netty world and do not find pipelines "unintuitive": they accumulate buffers and supply child actors with ready-to-use messages.

Take a look at our solution; it requires "org.scalaz" %% "scalaz-core" % "7.2.14" as a dependency.
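For reference, since the question's build file uses Gradle, the equivalent dependency line would look roughly like this (assuming Scala 2.11 binaries, to match the other dependencies above):

compile group: 'org.scalaz', name: 'scalaz-core_2.11', version: '7.2.14'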

The Codec class is a State monad that is called by the actor and produces output. In our projects we use Varint32 protobuf encoding, so every message is prepended with a varint32 length field:

import com.google.protobuf.CodedInputStream
import com.trueaccord.scalapb.{GeneratedMessage, GeneratedMessageCompanion, Message}
import com.zeptolab.tlc.front.codecs.Varint32ProtoCodec.ProtoMessage

import scalaz.{-\/, State, \/, \/-}

trait Accumulator
trait Codec[IN, OUT] {

  type Stream = State[Accumulator, Seq[IN]]

  def decode(buffer: Array[Byte]): Throwable \/ IN

  def encode(message: OUT): Array[Byte]

  def emptyAcc: Accumulator

  def decodeStream(data: Array[Byte]): Stream

}

object Varint32ProtoCodec {

  type ProtoMessage[T] = GeneratedMessage with Message[T]

  def apply[IN <: ProtoMessage[IN], OUT <: ProtoMessage[OUT]](protoType: GeneratedMessageCompanion[IN]) = new Varint32ProtoCodec[IN, OUT](protoType)

}

class Varint32ProtoCodec[IN <: ProtoMessage[IN], OUT <: ProtoMessage[OUT]](protoType: GeneratedMessageCompanion[IN]) extends Codec[IN, OUT] {

  import com.google.protobuf.CodedOutputStream

  private case class AccumulatorImpl(expected: Int = -1, buffer: Array[Byte] = Array.empty) extends Accumulator

  override def emptyAcc: Accumulator = AccumulatorImpl()

  override def decode(buffer: Array[Byte]): Throwable \/ IN = {
    \/.fromTryCatchNonFatal {
      val dataLength = CodedInputStream.newInstance(buffer).readRawVarint32()
      val bufferLength = buffer.length
      // Drop the varint32 length prefix (bufferLength - dataLength bytes) and parse the payload.
      val dataBuffer = buffer.drop(bufferLength - dataLength)
      protoType.parseFrom(dataBuffer)
    }
  }

  override def encode(message: OUT): Array[Byte] = {
    val messageBuf = message.toByteArray
    val messageBufLength = messageBuf.length
    // Write the payload length as a varint32 prefix, then append the payload itself.
    val prependLength = CodedOutputStream.computeUInt32SizeNoTag(messageBufLength)
    val prependLengthBuffer = new Array[Byte](prependLength)
    CodedOutputStream.newInstance(prependLengthBuffer).writeUInt32NoTag(messageBufLength)
    prependLengthBuffer ++ messageBuf
  }

  override def decodeStream(data: Array[Byte]): Stream = State {
    case acc: AccumulatorImpl =>
      if (data.isEmpty) {
        (acc, Seq.empty)
      } else {
        // Append the new chunk and determine the full frame length (prefix + payload).
        val accBuffer = acc.buffer ++ data
        val accExpected = readExpectedLength(accBuffer, acc)
        // Guard against accExpected == -1, i.e. the length prefix itself is still incomplete.
        if (accExpected != -1 && accBuffer.length >= accExpected) {
          // A complete frame is buffered: decode it and recurse on the remainder.
          val (frameBuffer, restBuffer) = accBuffer.splitAt(accExpected)
          val output = decode(frameBuffer) match {
            case \/-(proto) => Seq(proto)
            case -\/(_) => Seq.empty
          }
          val (newAcc, recOutput) = decodeStream(restBuffer).run(emptyAcc)
          (newAcc, output ++ recOutput)
        } else (AccumulatorImpl(accExpected, accBuffer), Seq.empty) // wait for more bytes
      }
    case _ => (emptyAcc, Seq.empty)
  }

  // Expected frame length = varint32 payload length + the varint's own width;
  // -1 means the length prefix has not been read yet.
  private def readExpectedLength(data: Array[Byte], acc: AccumulatorImpl) = {
    if (acc.expected == -1 && data.length >= 1) {
      \/.fromTryCatchNonFatal {
        val is = CodedInputStream.newInstance(data)
        val dataLength = is.readRawVarint32()
        val tagLength = is.getTotalBytesRead
        dataLength + tagLength
      }.getOrElse(acc.expected)
    } else acc.expected
  }

}
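A hypothetical usage sketch (the feed helper and its chunks parameter are mine, not from the original answer): since a frame may arrive split across several TCP chunks, the accumulator returned by one run is threaded into the next:

import com.zeptolab.tlc.proto.protocol.{Downstream, Upstream}

// Sketch only: `chunks` stands in for the byte arrays delivered by Tcp.Received.
def feed(chunks: Seq[Array[Byte]]): Seq[Upstream] = {
  val codec = Varint32ProtoCodec[Upstream, Downstream](Upstream)
  val (_, decoded) = chunks.foldLeft((codec.emptyAcc, Seq.empty[Upstream])) {
    case ((acc, out), chunk) =>
      val (nextAcc, messages) = codec.decodeStream(chunk).run(acc)
      (nextAcc, out ++ messages)
  }
  decoded
}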

And the actor is:

import akka.actor.{Actor, ActorRef, Props}
import akka.event.Logging
import akka.util.ByteString
import com.zeptolab.tlc.front.codecs.{Accumulator, Varint32ProtoCodec}
import com.zeptolab.tlc.proto.protocol.{Downstream, Upstream}

object FrameCodec {
  def props() = Props[FrameCodec]
}

class FrameCodec extends Actor {

  import akka.io.Tcp._

  private val logger       = Logging(context.system, this)
  private val codec        = Varint32ProtoCodec[Upstream, Downstream](Upstream)
  private val sessionActor = context.actorOf(Session.props())

  def receive = {
    case r: Received =>
      // First data from the connection: remember the IO actor, switch to the
      // streaming state with an empty accumulator, and re-process this message.
      context become stream(sender(), codec.emptyAcc)
      self ! r
    case PeerClosed => peerClosed()
  }

  private def stream(ioActor: ActorRef, acc: Accumulator): Receive = {
    case Received(data) =>
      val (next, output) = codec.decodeStream(data.toArray).run(acc)
      output.foreach { up =>
        sessionActor ! up
      }
      context become stream(ioActor, next)
    case d: Downstream =>
      val buffer = codec.encode(d)
      ioActor ! Write(ByteString(buffer))
    case PeerClosed => peerClosed()
  }

  private def peerClosed() = {
    logger.info("Connection closed")
    context stop self
  }

}
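To complete the picture, a hedged sketch of how FrameCodec might be wired into a classic Akka I/O TCP server; the Server actor, bind address, and port are illustrative and not from the original answer:

import java.net.InetSocketAddress

import akka.actor.{Actor, ActorSystem, Props}
import akka.io.{IO, Tcp}

class Server extends Actor {

  import Tcp._
  import context.system

  // Ask the TCP manager to listen; Bound or CommandFailed comes back to this actor.
  IO(Tcp) ! Bind(self, new InetSocketAddress("0.0.0.0", 9000))

  def receive = {
    case _: Bound => // now listening
    case Connected(_, _) =>
      // One FrameCodec per connection; Register routes Received/PeerClosed to it.
      val handler = context.actorOf(FrameCodec.props())
      sender() ! Register(handler)
  }
}

object Server extends App {
  ActorSystem("tlc").actorOf(Props[Server], "server")
}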
