Google Summer of Code 2015

Chris Aniszczyk edited this page Mar 2, 2015 · 15 revisions

At Twitter, we love open source, working with students, and Google Summer of Code (GSoC)! What is GSoC? Every year, Google invites students to come up with interesting problems for their favorite open source projects and work on them over the summer. Participants get support from the community, plus a mentor who makes sure they don't get lost and that they meet their goals. Aside from the satisfaction of solving challenging problems and contributing to the open source community, students get paid and get some sweet swag for their work! In our opinion, this is a great opportunity to get involved with open source, improve your skills, and help out the community!

If you're interested in Outreachy (formerly Outreach Program for Women) as an option, please see that wiki: https://github.com/twitter/twitter.github.com/wiki/Outreachy-(Round-10)

Information for Students

These ideas were contributed by our developers and our community; they are only meant to be a starting point. If you wish to submit a proposal based on one of these ideas, you may want to contact the developers and find out more about the particular suggestion you're looking at.

Being accepted as a Google Summer of Code student is quite competitive. Accepted students typically have thoroughly researched the technologies of their proposed project and have been in frequent contact with potential mentors. Simply copying and pasting an idea here will not work. On the other hand, creating a completely new idea without first consulting potential mentors is unlikely to work out.

If there is no specific contact given, you can ask questions via @TwitterOSS or via the twitter-gsoc mailing list.

Accepted Projects

For 2015, @TwitterOSS accepted X students to work on Y different open source projects:

The project details are listed below:

Adding a Proposal

Please follow this template:

  • Brief explanation:
  • Expected results:
  • Knowledge Prerequisite:
  • Mentor:

When adding an idea to this section, please try to include all of the data in the template above.

If you are not a developer but have a good idea for a proposal, get in contact with relevant developers first or @TwitterOSS.

Project Ideas

A good starting point for Finagle is the Quickstart: http://twitter.github.io/finagle/guide/Quickstart.html

You could also start digging in the code here: https://github.com/twitter/finagle/

Check out the Finagle mailing list if you have any questions.

finagle-http2

  • Brief explanation: HTTP/2 has been finalized and offers advantages over HTTP 1 (such as multiplexing) that would be useful for Finagle HTTP clients and servers.
  • Expected results: An experimental finagle-http2 implementation
  • Knowledge prerequisites: Scala, HTTP/2, and an interest in learning about Finagle
  • Mentors: Travis Brown (@travisbrown)

Kerberos authentication in Mux

  • Brief explanation: Mux is a new RPC session protocol in use at Twitter. We would like to add Kerberos authentication.
  • Expected results: Add KRB support to finagle-mux
  • Knowledge Prerequisite: Scala, Distributed systems
  • Mentor: Marius Eriksen (@marius) and Steve Gury (@stevegury)

Examples and Service Adaptors for Stitch

  • Brief explanation: Stitch is a library for RPC service composition that makes it easy to take advantage of batch APIs without muddling up your code with explicit batching logic. We'd like to develop better examples and tools for developers who want to use Stitch in the context of Finagle.
  • Expected results: Create an example project for Stitch
  • Knowledge prerequisites: Scala and an interest in learning about Finagle
  • Mentors: Travis Brown (@travisbrown)
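
The batching pattern that Stitch automates can be sketched in plain Python (this is an illustration of the idea only; Stitch itself is Scala, and none of these names are its real API): individual lookups are recorded first, then satisfied together by a single batch call.

```python
# Hypothetical sketch of the batching pattern Stitch automates
# (not Stitch's real API): queue individual key lookups, then
# satisfy them all with one batch backend call.

class Batcher:
    def __init__(self, batch_fn):
        self.batch_fn = batch_fn   # e.g. a multi-get RPC
        self.pending = []
        self.results = {}

    def get(self, key):
        # Each call only records the key; nothing is fetched yet.
        self.pending.append(key)
        return lambda: self.results[key]

    def run(self):
        # Deduplicate keys and issue a single batch call.
        keys = list(dict.fromkeys(self.pending))
        self.results.update(self.batch_fn(keys))
        self.pending.clear()

def fetch_users(batcher, ids):
    futures = [batcher.get(i) for i in ids]
    batcher.run()  # one batch RPC instead of len(ids) calls
    return [f() for f in futures]

backend_calls = []
def multi_get(keys):
    backend_calls.append(keys)
    return {k: f"user-{k}" for k in keys}

b = Batcher(multi_get)
print(fetch_users(b, [1, 2, 2, 3]))  # ['user-1', 'user-2', 'user-2', 'user-3']
print(len(backend_calls))            # 1 — a single batched backend call
```

The point of the design is that calling code stays written in terms of single lookups, while the batching logic lives in one place.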

Alternative Service Representations for Scrooge

  • Brief explanation: Scrooge is a Thrift code generator that can create client and server adaptors for Finagle, but the current representation of service interfaces makes it difficult to wrap endpoints in Finagle filters, for example. We're interested in exploring other approaches that would allow Scrooge-generated clients and servers to fit more cleanly into the abstractions provided by Finagle.
  • Expected results: Submit a contribution to Scrooge implementing a new client/server style
  • Knowledge prerequisites: Scala and an interest in learning about Thrift and Finagle
  • Mentors: Nik Shkrob (@nshkrob)

Libprocess Benchmark Suite

  • Brief explanation: Implement a benchmark suite for libprocess to identify potential performance improvements and test for performance regressions.
  • Knowledge Prerequisite: C++
  • Mentor: Ben Mahler (@bmahler) Jie Yu (@jie_yu)
  • JIRA Issue: MESOS-1018

Summingbird is a library that lets you write MapReduce programs that look like native Scala or Java collection transformations and execute them on a number of well-known distributed MapReduce platforms, including Storm and Scalding.
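
The "collection transformations" style can be sketched in plain Python (an analogy only; real Summingbird jobs are written in Scala and planned onto Storm or Scalding): a word count is just a flatMap followed by a monoid-style merge, and the same logical program can then be executed in batch or streaming mode.

```python
from collections import Counter

# A plain-Python analogy to the Summingbird style (Summingbird
# itself is Scala): a word count as flatMap + monoid merge. The
# merge step uses Counter addition, which is associative and
# commutative — exactly the property a platform needs to run the
# same program in batch (Scalding) or streaming (Storm) mode.

def flat_map(f, xs):
    return [y for x in xs for y in f(x)]

def word_count(tweets):
    words = flat_map(lambda t: t.lower().split(), tweets)
    return sum((Counter([w]) for w in words), Counter())

print(word_count(["hello world", "hello summingbird"]))
```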

Addition of Akka backend for streaming compute

  • Brief explanation: Akka (http://akka.io) is a popular open source distributed actor system. Integrating it into Summingbird would increase the range of potential compute platforms for users, making the system more accessible and suitable for more varied tasks.
  • Expected results: Add akka support via a summingbird-akka module
  • Knowledge Prerequisite: Need to know scala (or strong knowledge of Java with some functional programming background). Need to be somewhat familiar with Hadoop.
  • Mentor: Oscar Boykin (@posco)

Addition of Samza backend for streaming compute

  • Brief explanation: Samza (http://samza.incubator.apache.org/) is a new Apache incubator project that allows compute to be placed between two Kafka streams. Integrating it into Summingbird would increase the range of potential compute platforms for users, making the system more accessible and suitable for more varied tasks.
  • Expected results: Add samza support via a summingbird-samza module
  • Knowledge Prerequisite: Need to know scala (or strong knowledge of Java with some functional programming background). Need to be somewhat familiar with Hadoop, Yarn.
  • Mentor: Oscar Boykin (@posco) or Ian O'Connell (@0x138)

Better Spark Support in Summingbird

  • Brief explanation: We currently have an alpha version of Spark support for batch computation. This should be completed along with creating a demo application. After that, we should add a realtime layer using spark-streaming.
  • Expected results: Add spark-streaming support to summingbird along with a demo project
  • Knowledge Prerequisite: Need to know scala (or strong knowledge of Java with some functional programming background). Need to be somewhat familiar with Spark.
  • Mentor: Oscar Boykin (@posco) or Ian O'Connell (@0x138)

Addition of Tez backend for offline batch compute

  • Brief explanation: Tez (http://tez.incubator.apache.org) is a new Apache incubator project that generalizes and expands the map/reduce model of computation. Summingbird should be able to automatically take advantage of map-reduce-reduce plans, and other optimizations that Tez enables. This should perform better than the existing Hadoop-via-Cascading-via-Scalding backend that is currently available.
  • Expected results: Add tez support via a summingbird-tez module
  • Knowledge Prerequisite: Need to know scala (or strong knowledge of Java with some functional programming background). Need to be somewhat familiar with Hadoop, Yarn.
  • Mentor: Ian O'Connell (@0x138)

Addition of batch key/value store on Mesos or Yarn

  • Brief explanation: Something that is sorely missing from the open source release of Scalding is a good batch-writable, read-only key-value store to use for batch jobs. This could be something like ElephantDB (https://github.com/nathanmarz/elephantdb) or HBase. Having such a project set up with Summingbird would be a huge coup for the open-source community.
  • Expected results: Add a batch key/value store on Mesos or Yarn
  • Knowledge Prerequisite: Need to know Scala (or strong knowledge of Java with some functional programming background). Ideally familiar with Mesos or Yarn, and low-latency key-value stores like HBase or ElephantDB.
  • Mentor: Oscar Boykin (@posco) or Ian O'Connell (@0x138)

Scalding is Twitter's library for programming in Scala on Hadoop. It is approachable by newcomers, with a fields/data-frame-like API as well as a type-safe API. There is also a linear algebra API to support working with giant matrices and vectors on Hadoop.

Apache Tez support for Scalding

  • Brief explanation: Cascading 3 supports Apache Tez, which may compete with Spark in some workloads. If we update Scalding to use Cascading 3, we should be able to run Scalding on Tez. There are lots of little issues here and a few big ones, as some concepts from Hadoop are not present in Tez (the distributed cache changes) and some Cascading features are not yet supported.
  • Expected results: Add tez support to Scalding along with documentation / demo project
  • Knowledge Prerequisite: Need to know scala (or strong knowledge of Java with some functional programming background). Need to be somewhat familiar with Hadoop. Must be familiar with graphs for modeling flows of computation.
  • Mentor: Oscar Boykin @posco

Query Optimization in Scalding

  • Brief explanation: Right now, Scalding has some optimizations it can do because it can see the types of the data and functions. Those optimizations are baked into how the graph is produced. This project would instead create an AST in the Typed API of Scalding, and only just before running would we do a global optimization to produce the most optimal Cascading plan. This work can leverage existing code created for Summingbird to optimize these graphs.
  • Expected results: Add query optimization to Scalding
  • Knowledge Prerequisite: Need to know scala (or strong knowledge of Java with some functional programming background). Need to be somewhat familiar with Hadoop. Must be familiar with graphs for modeling flows of computation.
  • Mentor: Oscar Boykin @posco
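
The "build an AST first, optimize globally, then plan" idea can be sketched generically (hypothetical toy types, not Scalding's actual Typed API): here adjacent map nodes are fused into one before the pipeline runs, the same shape of rewrite a global optimizer would perform on a Cascading plan.

```python
# A hypothetical mini-AST illustrating "optimize before planning"
# (not Scalding's real Typed API): map(f).map(g) is fused into a
# single map(g . f) stage before execution.

class Source:
    def __init__(self, data): self.data = data
    def map(self, f): return Map(self, f)

class Map:
    def __init__(self, parent, f): self.parent, self.f = parent, f
    def map(self, f): return Map(self, f)

def optimize(node):
    # Rewrite Map(Map(p, f), g) into Map(p, g . f), recursively.
    if isinstance(node, Map):
        parent = optimize(node.parent)
        if isinstance(parent, Map):
            f, g = parent.f, node.f
            return Map(parent.parent, lambda x: g(f(x)))
        return Map(parent, node.f)
    return node

def run(node):
    if isinstance(node, Source):
        return list(node.data)
    return [node.f(x) for x in run(node.parent)]

pipeline = Source([1, 2, 3]).map(lambda x: x + 1).map(lambda x: x * 10)
plan = optimize(pipeline)
assert isinstance(plan.parent, Source)   # two map stages fused into one
print(run(plan))                         # [20, 30, 40]
```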

https://github.com/Parquet/parquet-mr/issues?labels=GSoC-2014&state=open

Use statistics to implement page level filtering in the filter2 API

  • Brief explanation: We currently apply filters to entire row groups as well as individual records, but we could apply them to pages as well. This would work similarly to how row group filtering currently works.
  • Expected results:
    • Statistics based filtering applied to pages in the parquet read path
    • Additional tests for correctness
  • Knowledge Prerequisite: Java, Hadoop, Test frameworks
  • Mentor: Alex Levenson (@THISWILLWORK) and/or Julien Le Dem (@J_)
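
The mechanism this project would extend to pages can be sketched generically (hypothetical structures, not Parquet's filter2 classes): with per-page min/max statistics, a predicate like `column > v` can drop an entire page whenever the page's max is ≤ v, without decoding any of its values.

```python
# Hypothetical sketch of statistics-based page skipping (not
# Parquet's actual filter2 classes): per-page min/max stats let a
# "column > value" predicate drop whole pages unread.

class Page:
    def __init__(self, values):
        self.values = values
        self.min, self.max = min(values), max(values)

def scan_gt(pages, threshold):
    kept, skipped = [], 0
    for page in pages:
        if page.max <= threshold:
            skipped += 1          # no value can match: skip the page
            continue
        kept.extend(v for v in page.values if v > threshold)
    return kept, skipped

pages = [Page([1, 2, 3]), Page([4, 5, 6]), Page([7, 8, 9])]
result, skipped = scan_gt(pages, 6)
print(result)   # [7, 8, 9]
print(skipped)  # 2 — first two pages skipped via stats alone
```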

Collect more statistics for rowgroups + pages

  • Brief explanation: We currently only collect the number of records, number of nulls, min, and max for a chunk of records. We could make use of more statistics when filtering in the read path.
  • Expected results:
    • Investigate which statistics would be most useful
    • Add more types of statistics, such as:
      • A bloom filter of the values when a chunk is not dictionary encoded (good for filtering)
      • A HyperLogLog of the values (good for fast count-distinct)
      • A CountMinSketch of the values (good for heavy hitters)
    • Additional tests for correctness
  • Knowledge Prerequisite: Java, Hadoop, Test frameworks
  • Mentor: Alex Levenson (@THISWILLWORK) and/or Julien Le Dem (@J_)
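
As a sketch of why a Bloom filter helps here (a generic toy, not a proposed Parquet encoding): it can prove a value is definitely absent from a chunk without reading the chunk, at the cost of occasional false positives, which makes it a good fit for skipping chunks on equality predicates.

```python
import hashlib

# Toy Bloom filter (a generic illustration, not a proposed Parquet
# format): a "definitely absent" answer lets a reader skip a chunk
# for equality predicates; a "maybe present" answer forces a read.

class BloomFilter:
    def __init__(self, size=1024, hashes=3):
        self.size, self.hashes = size, hashes
        self.bits = bytearray(size)

    def _positions(self, value):
        for i in range(self.hashes):
            h = hashlib.sha256(f"{i}:{value}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.size

    def add(self, value):
        for p in self._positions(value):
            self.bits[p] = 1

    def might_contain(self, value):
        # No false negatives: added values always return True.
        return all(self.bits[p] for p in self._positions(value))

bf = BloomFilter()
for v in ["alice", "bob", "carol"]:
    bf.add(v)

print(bf.might_contain("bob"))     # True
print(bf.might_contain("mallory")) # very likely False: safe to skip the chunk
```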

Add more filters to the filter2 API

  • Brief explanation: Parquet can currently filter values by ==, !=, >, >=, <, <= -- we could add some more, for example filter by value in(1,2,3) or notIn(1,2,3)
  • Expected results:
    • Add more filter types to the filter2 API
    • Additional tests for correctness
  • Knowledge Prerequisite: Java, Hadoop, Test frameworks
  • Mentor: Alex Levenson (@THISWILLWORK) and/or Julien Le Dem (@J_)
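
The proposed operators can be sketched generically (illustrative names, not the real filter2 API): if each predicate is just a function from a value to a boolean, `in` is set membership, `notIn` is its complement, and both compose with the existing comparison operators.

```python
# Hypothetical sketch of in/notIn predicates (illustrative names,
# not the real filter2 API): each predicate is a value -> bool
# function, so new operators compose with existing ones.

def eq(v):       return lambda x: x == v
def gt(v):       return lambda x: x > v
def in_(*vs):    s = frozenset(vs); return lambda x: x in s
def not_in(*vs): s = frozenset(vs); return lambda x: x not in s
def and_(p, q):  return lambda x: p(x) and q(x)

# Keep positive values that are not 2 or 4.
keep = and_(gt(0), not_in(2, 4))
print([x for x in [-1, 1, 2, 3, 4, 5] if keep(x)])  # [1, 3, 5]
```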

Take advantage of dictionary encoding in the filter2 API

  • Brief explanation: When applying filters to dictionary encoded columns, apply the filter to the dictionary instead of to the individual values.
  • Expected results:
    • Use dictionaries when filtering row groups
    • Use dictionaries when filtering individual records
    • Additional tests for correctness
  • Knowledge Prerequisite: Java, Hadoop, Test frameworks
  • Mentor: Alex Levenson (@THISWILLWORK) and/or Julien Le Dem (@J_)
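
The core idea can be sketched generically (hypothetical names, not Parquet's real classes): evaluate the predicate once per distinct dictionary entry rather than once per record, and skip the whole chunk when no entry matches.

```python
# Hypothetical sketch of dictionary-aware filtering (not Parquet's
# actual classes): evaluate the predicate once per distinct
# dictionary entry, then test each encoded value by its index.

def filter_dictionary_encoded(dictionary, indices, predicate):
    # One predicate evaluation per distinct value...
    matches = [predicate(v) for v in dictionary]
    if not any(matches):
        return []                # whole column chunk can be skipped
    # ...then a cheap index lookup per record.
    return [dictionary[i] for i in indices if matches[i]]

dictionary = ["US", "FR", "JP"]   # distinct values
indices = [0, 0, 1, 2, 0, 1]      # encoded column data
print(filter_dictionary_encoded(dictionary, indices, lambda c: c != "US"))
# ['FR', 'JP', 'FR']
```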

Profile and improve the assembly time for parquet-thrift

  • Brief explanation: When assembling Thrift records, Parquet uses a TProtocol that boxes every value in an anonymous class. Investigate and implement a more efficient solution.
  • Expected results:
    • Faster implementation of assembling thrift records
  • Knowledge Prerequisite: Java, Hadoop, Test frameworks
  • Mentor: Alex Levenson (@THISWILLWORK) and/or Julien Le Dem (@J_)

Parquet compatibility across tools

  • Brief explanation: Develop cross-tool compatibility tests for Parquet (https://github.com/Parquet/parquet-mr/issues/300)
  • Expected results:
    • Compatibility of nested data types across tools - Pig, Hive, Avro, Thrift, etc.
    • Automated compatibility checks between the Java implementation and Impala (across release versions)
  • Knowledge Prerequisite: Java, Hadoop, Test frameworks
  • Mentor: Alex Levenson (@THISWILLWORK) and/or Julien Le Dem (@J_)

Decouple Parquet from the Hadoop API

Study state-of-the-art floating point compression algorithms

(https://github.com/Parquet/parquet-mr/issues/306)

  • Brief explanation: Study existing lossless floating point compression papers and implement benchmarks.
  • Expected results: Provide a reference implementation and benchmark comparison, with integration into the Parquet library
  • Mentor: Julien Le Dem (@J_)
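
One well-known family of lossless approaches from the literature (used, for example, by Facebook's Gorilla and the FPC line of work) XORs each double with its predecessor; slowly changing series then produce XOR words that are mostly zero bits and compress well. A minimal sketch of just the core transform, not a full codec:

```python
import struct

# Sketch of the XOR-delta transform behind Gorilla/FPC-style
# lossless floating point compression: reinterpret each double as
# a 64-bit integer and XOR it with its predecessor. Repeated or
# nearby values yield deltas full of zero bits, which a bit-packing
# stage (not shown) can encode compactly.

def to_bits(x):
    return struct.unpack(">Q", struct.pack(">d", x))[0]

def xor_deltas(values):
    bits = [to_bits(v) for v in values]
    return [bits[0]] + [a ^ b for a, b in zip(bits, bits[1:])]

def undo(deltas):
    bits = [deltas[0]]
    for d in deltas[1:]:
        bits.append(bits[-1] ^ d)
    return [struct.unpack(">d", struct.pack(">Q", b))[0] for b in bits]

series = [100.0, 100.0, 100.5, 101.0]
deltas = xor_deltas(series)
print(deltas[1])                 # 0 — a repeated value XORs to nothing
assert undo(deltas) == series    # the transform is lossless
```

A benchmark suite for this project would compare such transforms (plus entropy coding) against general-purpose compressors on representative double-precision datasets.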

You can learn more about getting involved with the Netty Project here: http://netty.io/community.html

Android testsuite

  • Brief explanation:
    • The Netty project team is willing to support Android 4.0 Ice Cream Sandwich officially, and we need an automated testsuite to achieve that goal.
  • Expected results:
    • During the build process, an Android emulator is automatically started and stopped to run all (or applicable) JUnit tests inside the Android emulator.
    • The result of the JUnit tests inside the emulator affects the build result so that we can run the Android compatibility test in our CI machine.
    • All Android compatibility issues found during the test are fixed.
  • Knowledge Prerequisite:
    • Java and Android programming
    • Custom JUnit runners
    • Experience with building a network application atop Netty
  • Mentor: Trustin Lee (@trustin)

For more information about Pants, check these out:

Pants Interactive Tutorial

  • Brief explanation: Add an interactive tutorial to learn Pants
  • Expected results: Create an interactive tutorial that will guide you in learning Pants. Use JQueryTerminal to help stage a prompt that guides you through some simple Pants use cases.
  • Knowledge Prerequisite: Python, Java, Javascript
  • Mentor: Chris Aniszczyk (@cra)

Eclipse Integration

  • Brief explanation: Add Eclipse integration to Pants
  • Expected results: Create a classpath container based on integrating with Pants and a launcher.
  • Knowledge Prerequisite: Python, Java, Eclipse
  • Mentor: Chris Aniszczyk (@cra)

You can read more about the project here: https://github.com/pantsbuild/intellij-pants-plugin#intellij-pants-plugin

Running tests using Pants.

The Pants plugin can compile via the "Build" or "Rebuild" menu items using Pants. However, we still rely on IntelliJ's built-in JUnit test runner to run tests. Users can manually create a Pants Run Configuration to run tests.

We want to swap out IntelliJ's built-in JUnit test runner and create a Pants Run Configuration whenever users run tests using the menu option or the right-click action. The next step will be to add functionality to Pants to produce IntelliJ test output, which will generate a pretty test tree view.

This project will give you the opportunity to work on both intellij-pants-plugin and Pants.

Import Pants Python projects

The Pants plugin currently imports only Java and Scala projects. We want to add the functionality to import Python projects. This will require you to add the necessary APIs to Pants to get the dependencies for a project, and will require an understanding of how Python requirements are specified and resolved in Pants. The minimum expectations for an imported project are: the user should be able to navigate through the source code, and clicking on third-party dependency imports should take you to the source code of the Python library.

Pants Plugin Wizard.

IntelliJ provides a wizard to create a new project, easily configure basic dependencies and start working with it right away. Currently we don’t have such integration for the Pants plugin.

Sample Import Wizard Screen

We want to start by creating a simple wizard for Pants projects with just a few basic templates for Java and Scala projects. In the wizard, a user will be able to choose the type of project they want to create, configure a main class if needed, and add some additional dependencies. After everything is configured, a project will be created with BUILD files and targets. The next step will be to add the ability to create custom templates in your repo. For example, here at Twitter we usually create Thrift services with a common structure and initial configuration. It would be great to be able to create Pants projects for such services in just a few clicks.

Project

Project URL

Project Idea (e.g., New Feature)

  • Brief explanation:
  • Expected results:
  • Knowledge Prerequisite:
  • Mentor:

General Proposal Requirements

Proposals will be submitted via http://www.google-melange.com/gsoc/homepage/google/gsoc2014, so plain text is the best way to go. We expect your application to be in the range of 1,000 words. Anything less than that will probably not contain enough information for us to determine whether you are the right person for the job. Your proposal should contain at least the following information, but feel free to include anything you think is relevant:

  • Please include your name and twitter handle!
  • Title of your proposal
  • Abstract of your proposal
  • A link to your github id (if you have one)
  • Detailed description of your idea, including an explanation of why it is innovative
  • Description of previous work, existing solutions (links to prototypes, bibliography are more than welcome)
  • Mention the details of your academic studies, any previous work, internships
  • Any relevant skills that will help you achieve the goal (programming languages, frameworks)?
  • Any previous open-source projects (or even previous GSoC) you have contributed to?
  • Do you plan to have any other commitments during GSoC that may affect your work? Any vacations/holidays planned?
  • Contact details

Good luck!