
Roadmap: major features

Expose additional performance metrics through the API. Currently we collect only basic throughput per port; more detail is needed, along with short-run and long-run averages.
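As a sketch of the direction, the hypothetical record below shows what a richer per-port metric could carry; neither PortMetrics nor the query call in the trailing comment exists in the framework today.

```cpp
#include <cstddef>

// Hypothetical per-port metrics record -- not an existing Pothos type.
// It extends the current raw throughput counter with totals and
// windowed (short-run vs long-run) rate averages.
struct PortMetrics
{
    std::size_t totalElements; // elements moved since activation
    std::size_t totalBytes;    // bytes moved since activation
    double shortRunRate;       // elements/sec over a recent window
    double longRunRate;        // elements/sec averaged since activation
};

// Hypothetical accessor on a running block (illustrative only):
// PortMetrics m = block.queryPortMetrics("out0");
```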

Expose live performance metrics in the graphical tool. Metrics could be displayed on a connection as a color or as a throughput number, or in additional dock panels that enumerate and display metrics.

We are looking to support and demonstrate Pothos on a variety of interesting hardware platforms. The following is a list of planned demonstrations. Further suggestions and hardware donations are welcome.

  • Use the Pothos FPGA project to control a topology of processing blocks within the FPGA. The connections in the topology will automatically configure the FPGA's internal routing. Use the topology and virtual channels to "snoop" on flows within the FPGA.
  • Zero-copy scheduler buffer integration with shared memory/DMA flows to and from the FPGA using the custom buffer API (see the sketch after this list). This also implies automatic GPP ingress and egress when FPGA blocks from the Pothos FPGA project are connected to GPP blocks.
  • Demonstrate using the GUI and framework to remotely deploy a topology onto an ARM board, using graphical widgets to control and monitor the remote topology.
  • Demonstrate buffer integration and the Pothos FPGA project on Zynq through shared memory.
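Below is a rough sketch of the custom buffer API direction from the second item above, assuming a Pothos::BufferManager subclass with init/empty/pop/push hooks as in the framework headers; all DMA specifics are placeholders for platform-specific driver calls.

```cpp
#include <Pothos/Framework.hpp>
#include <cstddef>

// Rough sketch: a buffer manager backed by DMA-capable shared memory,
// so the scheduler's zero-copy path reaches the FPGA directly.
// All DMA mapping logic is left as placeholder comments.
class DmaBufferManager : public Pothos::BufferManager
{
public:
    void init(const Pothos::BufferManagerArgs &args) override
    {
        Pothos::BufferManager::init(args);
        // placeholder: map args.numBuffers buffers of args.bufferSize
        // bytes from the DMA shared memory region and queue the first
        // one as the front buffer for the scheduler
    }

    bool empty(void) const override
    {
        return _numAvailable == 0; // no mapped DMA buffers to hand out
    }

    void pop(const size_t numBytes) override
    {
        // placeholder: the scheduler consumed numBytes from the front
        // buffer; advance to the next available DMA buffer
    }

    void push(const Pothos::ManagedBuffer &buff) override
    {
        // placeholder: a consumer released this buffer; recycle it
        // into the DMA free list so the FPGA can fill it again
        _numAvailable++;
    }

private:
    size_t _numAvailable = 0;
};
```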

Finish the Java-facing bindings so that Java can call into the Pothos Proxy API, and create a Java-style wrapper for Pothos::Block so the Block API best matches accepted coding practices in Java.
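For reference, this is roughly what the existing C++ side of the Proxy API looks like; the Java bindings would mirror each of these calls with a Java-native class. The block path "/comms/gain" and its setGain method are assumptions for illustration.

```cpp
#include <Pothos/Proxy.hpp>

int main(void)
{
    // Create a proxy environment -- the Java bindings would expose an
    // equivalent environment/proxy/call object model.
    auto env = Pothos::ProxyEnvironment::make("managed");

    // Look up the block registry and make a block through the dynamic
    // call interface (block path and method are illustrative).
    auto registry = env->findProxy("Pothos/BlockRegistry");
    auto gain = registry.callProxy("/comms/gain", "float");
    gain.callVoid("setGain", 2.0);
    return 0;
}
```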

Create C# bindings, both through Mono and Windows managed code.

Use the graphical tool to create topological hierarchies of processing blocks. Hierarchies can be included in top-level designs and deployed on available hosts. Modifications to a hierarchy should cause re-evaluation of client topologies.
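A minimal sketch of what such a hierarchy looks like in code: a Pothos::Topology subclass that wires internal blocks together, assuming external ports are exported by connecting internal blocks to the topology itself. The block paths and constructor arguments here are assumptions for illustration.

```cpp
#include <Pothos/Framework.hpp>

// A minimal sketch of a reusable hierarchy: a Topology subclass that
// wires internal blocks together and exports external ports by
// connecting them to the topology itself.
class FilterAndGain : public Pothos::Topology
{
public:
    FilterAndGain(void)
    {
        // block paths and constructor arguments are illustrative
        auto filter = Pothos::BlockRegistry::make("/comms/fir_filter", "float", "FLOAT");
        auto gain = Pothos::BlockRegistry::make("/comms/gain", "float");

        // internal connection between the two blocks
        this->connect(filter, 0, gain, 0);

        // export the hierarchy's "in0" and "out0" ports
        this->connect(this, "in0", filter, 0);
        this->connect(gain, 0, this, "out0");
    }
};
```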

Create a processing block in one of the supported languages. The block will be compiled and deployed on available hosts.
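For example, a minimal block in C++ (one of the supported languages) might look like the following; the registry path and the scaling behavior are arbitrary choices for illustration.

```cpp
#include <Pothos/Framework.hpp>
#include <algorithm>

// A minimal C++ block: scales a float stream by a settable factor.
class ScaleBlock : public Pothos::Block
{
public:
    static Pothos::Block *make(void)
    {
        return new ScaleBlock();
    }

    ScaleBlock(void) : _factor(1.0f)
    {
        this->setupInput(0, typeid(float));
        this->setupOutput(0, typeid(float));
        this->registerCall(this, POTHOS_FCN_TUPLE(ScaleBlock, setFactor));
    }

    void setFactor(const float factor)
    {
        _factor = factor;
    }

    void work(void) override
    {
        auto inPort = this->input(0);
        auto outPort = this->output(0);
        const size_t n = std::min(inPort->elements(), outPort->elements());
        if (n == 0) return;

        // scale each input element into the output buffer
        const float *in = inPort->buffer().as<const float *>();
        float *out = outPort->buffer().as<float *>();
        for (size_t i = 0; i < n; i++) out[i] = in[i] * _factor;

        inPort->consume(n);
        outPort->produce(n);
    }

private:
    float _factor;
};

// Register so the framework can instantiate the block by path.
static Pothos::BlockRegistry registerScaleBlock(
    "/examples/scale", &ScaleBlock::make);
```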

If a node in the network goes down, automatically redeploy the design on the next available node.

Allow the user to specify an expected throughput or latency. Attempt to alter parameters to meet the constraint, such as modifying the buffer size per work(), raising thread priority, or migrating hosts.
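A purely hypothetical sketch of how such a constraint might be expressed; neither PerformanceGoal nor a setGoal() call exists in the framework today.

```cpp
// Purely hypothetical: a user-facing performance constraint.
struct PerformanceGoal
{
    double minThroughput; // elements per second, 0 = unconstrained
    double maxLatency;    // seconds, 0 = unconstrained
};

// topology.setGoal(PerformanceGoal{10e6, 1e-3});
// The scheduler could react by resizing the buffer consumed per call
// to work(), raising thread priority, or migrating blocks to other hosts.
```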
