Which compression is used in Site-to-Site (Remote Process Group)

Question by Andrew Grande · Oct 25, 2015 at 05:07 PM · Tags: nifi, hdf, compression, site2site, dataflow
  1. Which compression algorithm is used when a remote port communication is set up?
  2. Can it be customized?
  3. Does it work at the FlowFile level, or on the batch that the s2s protocol negotiates for transmission?

Attachment: remote-port-connection-status.png (57.3 kB)
1 Reply

Best Answer

Answer by jwitt · Oct 25, 2015 at 05:23 PM

Site-to-Site uses deflate at level 1 and compresses data in blocks/buffers. With site-to-site, a series of 1..N flowfiles is sent at once and ack'd as a group. It is not configurable at this time. Keep in mind, of course, that you can compress before sending to s2s and decompress after receiving from s2s using the CompressContent processor.
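To make the "deflate at level 1" point concrete, here is a minimal plain-Java sketch of block-wise deflate compression at level 1 (java.util.zip.Deflater.BEST_SPEED, which is level 1). It illustrates the algorithm and level Joe describes, not NiFi's actual site-to-site code; the class name and the 8 KB buffer size are arbitrary choices for the example.

```java
import java.io.ByteArrayOutputStream;
import java.util.zip.Deflater;

public class DeflateLevel1Demo {

    static byte[] compress(byte[] input) {
        // Level 1 == Deflater.BEST_SPEED: fastest setting, lowest ratio.
        Deflater deflater = new Deflater(Deflater.BEST_SPEED);
        deflater.setInput(input);
        deflater.finish();

        ByteArrayOutputStream out = new ByteArrayOutputStream(input.length);
        byte[] buffer = new byte[8192]; // compress in fixed-size blocks/buffers
        while (!deflater.finished()) {
            int n = deflater.deflate(buffer);
            out.write(buffer, 0, n);
        }
        deflater.end();
        return out.toByteArray();
    }

    public static void main(String[] args) {
        byte[] data = "some repetitive payload ".repeat(100).getBytes();
        System.out.printf("raw=%d bytes, deflated=%d bytes%n",
                data.length, compress(data).length);
    }
}
```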

Do you feel there would be real value in making the compression used by s2s configurable? If so, would that be for cases where snappy makes sense, such as certain types of text data?

Thanks

Joe

Comment by Andrew Grande · Oct 25, 2015 at 05:30 PM

Yes, Joe, I had something like snappy in my mind as a good middle ground between size and performance.

At a minimum, a compression level property should be exposed to the operator, to balance the existing compression protocol between speed/CPU load and network traffic volume.
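For readers curious about the speed/size middle ground Andrew mentions, here is a minimal round-trip sketch with snappy in plain Java. It assumes the third-party snappy-java library (org.xerial.snappy) is on the classpath; it illustrates the format's trade-off and is not part of NiFi's s2s implementation.

```java
import java.io.IOException;
import java.util.Arrays;
import org.xerial.snappy.Snappy; // third-party snappy-java library

public class SnappyDemo {
    public static void main(String[] args) throws IOException {
        byte[] raw = "tab,separated,text,data\n".repeat(500).getBytes();

        // Snappy trades some compression ratio for much lower CPU cost.
        byte[] packed = Snappy.compress(raw);
        byte[] restored = Snappy.uncompress(packed);

        System.out.printf("raw=%d, snappy=%d, roundtrip ok=%b%n",
                raw.length, packed.length, Arrays.equals(raw, restored));
    }
}
```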
