Pulsar Machine Development Status Updates from February 2008

In February 2008 there was a second round of updates on the status of pulsar machines being developed using CASPER hardware and tools.

Request for Status Updates

The last round of pulsar-hardware updates and the useful discussion that followed (*) was four months ago now, and many of you have e-mailed me about your progress since. To keep everyone up to date, we would like to invite you all to reply with a short summary of the current status of your CASPER-related pulsar instrument plans or development. I'll keep a compilation on the wiki (#), as before.

For your further information: after the success of the CASPER workshop in summer 2007, we are planning a follow-up workshop for August 2008. We expect the emphasis of this workshop to shift from tool-flow introduction towards projects using CASPER technology, and we hope you too will present your work. http://casper.berkeley.edu/wiki/index.php?title=ws08

Kind regards, Joeri van Leeuwen

(*) http://casper.ssl.berkeley.edu/wiki/index.php?title=cp-oct07
(#) http://casper.ssl.berkeley.edu/wiki/index.php?title=cp-feb08

Glen Jones, Caltech

Hello All,

I looked back at my original status update from a few months ago and realized how far I've come, though I am still somewhat behind where I had expected to be.

I set up a page on the CASPER wiki to provide regular updates to the rest of the community and to help distribute my contributions: http://casper.berkeley.edu/wiki/index.php?title=CASPER_Development_at_Caltech

I have still not done much work on a pulsar machine per se, but have been putting together a lot of the building blocks that will be necessary.

I spent a considerable amount of time getting a reliable bonded link between the iBOB and the BEE2, where a data stream is ping-ponged onto the iBOB's two XAUI links and then reconstructed at the BEE2. In fact, the system I have can synchronize the data from two iBOBs over four XAUI links. This also required distributing a reliable clock between the two systems. I am in the process of documenting this effort for the benefit of others; a pre-release version is on the wiki to give people an idea of what is coming.
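
For readers unfamiliar with the scheme, the following Python sketch illustrates the ping-pong idea conceptually; the names and framing are illustrative assumptions on our part, not the actual CASPER gateware interface, which is built in the Simulink tool flow.

```python
# Conceptual model of ping-ponging one data stream across two XAUI links
# and reconstructing it on the far side. In the real design both links must
# share a common clock/sync so the receiver can re-interleave correctly.

def split_over_two_links(stream):
    """Alternate successive words between XAUI link 0 and XAUI link 1."""
    return stream[0::2], stream[1::2]

def reconstruct(link0, link1):
    """Re-interleave the two half-rate streams into the original stream."""
    out = []
    for a, b in zip(link0, link1):
        out.extend([a, b])
    return out

samples = list(range(16))
link0, link1 = split_over_two_links(samples)
assert reconstruct(link0, link1) == samples
```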

I developed a highly pipelined, easy-to-understand vector accumulator to add to the CASPER block set. It has been tested in hardware and seems to work well up to 256 MHz.
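
As a rough illustration of what the block computes (the FPGA implementation is pipelined; this sketch only models the arithmetic, and the names are ours):

```python
import numpy as np

def vector_accumulate(spectra, acc_len):
    """Sum consecutive groups of acc_len spectra element-wise, as a vector
    accumulator does before each readout."""
    spectra = np.asarray(spectra)
    n_out = len(spectra) // acc_len
    return spectra[:n_out * acc_len].reshape(n_out, acc_len, -1).sum(axis=1)

# 8 four-channel spectra accumulated 4 at a time -> 2 output vectors of 4.0s
print(vector_accumulate(np.ones((8, 4)), 4))
```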

I have built various combinations of IQ, dual-polarization, dual-channel spectrometers (not all features at once), constrained to the iBOB and running at maximum speed, as proofs of concept and to get an idea of what fits in an FPGA. Lately I have spent a few weeks working on an IQ imbalance correction system, which is now working well. I am preparing for my candidacy exam in March, so I wanted to have some algorithms like this that will appeal to electrical engineers. A report on this will also be forthcoming.
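
For reference, one standard blind gain/phase correction looks like the sketch below; this is a textbook method and an assumption on our part, not necessarily the algorithm used in the design above.

```python
import numpy as np

def correct_iq(i, q):
    """Estimate gain and phase imbalance between I and Q from signal
    statistics, and return a corrected Q with the I leakage removed."""
    alpha = np.sqrt(np.mean(q**2) / np.mean(i**2))      # gain imbalance
    sin_phi = np.mean(i * q) / (alpha * np.mean(i**2))  # phase imbalance
    cos_phi = np.sqrt(1.0 - sin_phi**2)
    return (q / alpha - sin_phi * i) / cos_phi

# Test: a tone with ~1 dB gain error and 10 degrees of phase skew
theta = 0.01 * np.arange(100000)
i = np.cos(theta)
q = 1.12 * np.sin(theta + np.deg2rad(10.0))
q_corr = correct_iq(i, q)
print(np.mean(i * q))       # nonzero: I and Q are not orthogonal
print(np.mean(i * q_corr))  # ~0: imbalance removed
```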

As for hardware, we now have two iBOBs, two ADCs, a BEE2, and an HP ProCurve 2900-24G 10Gb/1Gb switch. I have not had a chance to test the 10GbE aspect of the switch but plan to do so soon. If anyone has any designs that would be useful for such a test, please let me know.

Over the next couple of months I plan to clean up and distribute the iBOB-BEE2 link and the IQ correction system. I am going to start work on a kurtosis-based spectrometer capable of RFI discrimination; a sketch of the underlying statistic follows below. The next big item on my list is a DRAM transient buffer. This will also require getting 10GbE up and running, since transferring gigabytes of DRAM contents through BORPH is not practical. I'd like to have most of the real-time, hardware-based incoherent-dedispersion transient trigger worked out in the next month. Sometime in the next few months we should have our first on-telescope tests, which I will present at the URSI General Assembly.
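
The spectral-kurtosis statistic behind such an RFI discriminator (in the style of the Nita & Gary estimator) can be modeled as below; this Python sketch is our illustration of the idea, and the eventual FPGA design may differ in detail.

```python
import numpy as np

def spectral_kurtosis(power, M):
    """SK estimator per channel over M consecutive power spectra.
    Gaussian noise gives SK ~ 1; strong or intermittent RFI deviates."""
    power = np.asarray(power[:M])
    s1 = power.sum(axis=0)
    s2 = (power**2).sum(axis=0)
    return ((M + 1.0) / (M - 1.0)) * (M * s2 / s1**2 - 1.0)

M, nchan = 1024, 128
rng = np.random.default_rng(0)
p = rng.exponential(1.0, size=(M, nchan))  # noise-only power spectra
p[::16, 40] += 50.0                        # intermittent RFI in channel 40
sk = spectral_kurtosis(p, M)
print(sk[40])   # >> 1: flagged as RFI
print(sk[0])    # ~ 1: clean channel
```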

Glenn

Berkeley

Summary

At Berkeley we mostly work on single-IBOB machines that feed CPU computing clusters. Our 2K-channel full-Stokes digital filterbank with accumulation and 10GbE output can process up to 512 MHz of bandwidth. We are currently integrating one version of this digital filterbank with the Allen Telescope Array digital beamformer; a similar version might be deployed to Parkes and/or Effelsberg.

Below we attach a more detailed description of our work since October, with news on the related filterbank precursor hardware for the ATA, called Fly's Eye, and on the development of channelizers for coherent dedispersion clusters.

Details

Since October, our work has focused on four projects:

  1. ATA pulsar spectrometer
  2. Fly's Eye spectrometers
  3. Parkes pulsar spectrometer
  4. ASP-family digitizer+channelizer

1. ATA pulsar spectrometer with interface to digital beamformer

Our main focus at the moment is a pulsar spectrometer for the ATA that interfaces directly with the ATA beamformer over XAUI. Using experience from projects #2 and #3 below, Peter has built a prototype using ADCs as inputs and tested it in the lab. The digital beamformer output (work by Billy Barott and Oren Milgrome from the SETI Institute and UCB RAL) is similar in format to the ADC output and comes in over a single CX4. The current spectrometer features 108 MHz of bandwidth, complex sampling (I/Q), full Stokes, and 1K or 2K channels. Readout is done over 10GbE, through a 10GbE -> 1GbE switch, to several 8-core Xeon machines. We have had some designs work with 2K channels, but mainly develop with 1K channels, where MATLAB is more stable.
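
For concreteness, the per-channel full-Stokes products that such a spectrometer accumulates can be written as below, given channelized complex voltages for the two polarizations; sign and ordering conventions vary, and this is one common choice rather than necessarily the exact ATA convention.

```python
import numpy as np

def stokes(x, y):
    """Full-Stokes products from dual-polarization complex voltages x, y."""
    I = np.abs(x)**2 + np.abs(y)**2
    Q = np.abs(x)**2 - np.abs(y)**2
    U = 2.0 * (x * np.conj(y)).real
    V = 2.0 * (x * np.conj(y)).imag
    return I, Q, U, V

rng = np.random.default_rng(1)
x = rng.normal(size=1024) + 1j * rng.normal(size=1024)
y = rng.normal(size=1024) + 1j * rng.normal(size=1024)
I, Q, U, V = stokes(x, y)
# For instantaneous (unaveraged) samples, I^2 = Q^2 + U^2 + V^2 exactly
assert np.allclose(I**2, Q**2 + U**2 + V**2)
```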

2. Fly's Eye

Together with Andrew Siemion (CASPER, UCB Astronomy), Peter built a set of 44 independent spectrometers for individual ATA dishes, to maximize the field of view for radio-transient detection. These were useful tests for pulsar hardware, too. Using 11 IBOBs with 4 spectrometers each, all 208 MHz-bandwidth spectrometers have 128 channels read out at 1600 spectra/second. The readout is done over 100 Mbit Ethernet with UDP packets, and this interface is what limits the readout rate. We use a PC running gulp (*) to save the spectra from all 11 IBOBs to disk for post-processing. We can recommend gulp: it is a multi-threaded data recorder that has, so far, been zero-packet-loss. It was with this setup that we detected the first pulsars with the ATA.

(*) http://staff.washington.edu/corey/gulp/
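
A minimal receiver for this kind of UDP spectrum readout might look like the sketch below; the port number and packet layout are illustrative assumptions (the real system records raw packets with gulp and parses them offline).

```python
import socket
import struct

PORT = 33333   # hypothetical readout port
NCHAN = 128    # 128 channels per spectrum, as described above

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", PORT))

while True:
    pkt, addr = sock.recvfrom(9000)
    # Assume a 64-bit sequence counter followed by NCHAN 32-bit channel powers
    (seq,) = struct.unpack_from(">Q", pkt, 0)
    spectrum = struct.unpack_from(">%dI" % NCHAN, pkt, 8)
    print(seq, spectrum[0])
```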

3. Parkes spectrometer

Peter has built a 400 MHz-bandwidth, real-sampling, dual-polarization, 1K-channel spectrometer for use at the Parkes telescope. This IBOB-based design was the first CASPER design to use the IBOB's 10GbE output, and it has now been successfully tested in the lab. It is due to be sent to Australia this month.

The readout rate is nominally 30 kHz (which corresponds to a data rate of ~60 Mbytes/sec) but can be changed at run time up to 195 kHz (~380 Mbytes/sec). Significantly slower readout (i.e. larger accumulations) is possible but requires increasing the accumulator bit width.
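
These figures are consistent with 1024 channels x 2 polarizations x 1 byte per output value, which is our assumption for the output layout that reproduces the quoted rates:

```python
bytes_per_spectrum = 1024 * 2 * 1  # channels x pols x bytes (assumed layout)
for rate_hz in (30e3, 195e3):
    print("%.0f kHz -> %.0f MB/s" % (rate_hz / 1e3,
                                     rate_hz * bytes_per_spectrum / 2**20))
# 30 kHz -> 59 MB/s (~60), 195 kHz -> 381 MB/s (~380)
```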

We tested this design (as with designs #1 and #4) using both a direct connection from the IBOB to a PC with a 10GbE NIC, and an HP ProCurve 3400cl 10GbE+1GbE switch (i.e. IBOB-to-switch is 10GbE and switch-to-PC is 1GbE).

4. ASP-family upgrade

We are working on possible IBOB-based upgrades to the ASP family of coherent dedispersion machines at Green Bank, Arecibo and Nancay. Such an IBOB setup could significantly increase bandwidth, up to say 1GHz over 2 IBOBs.

We are now testing a design that uses the full 2x10 Gbps output per IBOB while still including header information in the output stream. Peter will add it to the CASPER library once it has been finalized. This will allow us to build a ~500 MHz digitizer+channelizer that outputs ~20 Gbps of complex samples directly from the coarse PFB+FFT to a 10GbE -> 1GbE switch and on to x86 nodes for real-time processing.
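
The ~20 Gbps figure is consistent with 8-bit real + 8-bit imaginary samples in each of two polarizations (the bit widths are our assumption), with header overhead filling out the 2x10 Gbps links:

```python
sample_rate = 500e6                   # complex samples/sec out of the PFB+FFT
payload = sample_rate * 2 * 2 * 8     # x (re+im) x 2 pols x 8 bits
print(payload / 1e9, "Gbps payload")  # 16.0 Gbps before headers
```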

Kind regards,

Don Backer, Joeri van Leeuwen, Peter McMahon and Dan Werthimer

KAT

The current KAT plans are to build a pulsar spectrometer for the Hartebeesthoek Radio Observatory (HartRAO) based on the Parkes design. KAT-7 (the 7-antenna prototype system to be deployed in 2009) will not be used for pulsar science, so the goal is to gain experience with HartRAO for now, in preparation for MeerKAT (~80 antennas, with beamformer) in 2012.

NRAO GB/CV

On April 17, the team working on the new pulsar backend for the GBT as part of the CICADA project (Configurable Instrument Collaboration for Agile Data Acquisition) achieved first light with GUPPI (Green Bank Ultimate Pulsar Processing Instrument). GUPPI is based on the open-source FPGA technologies developed by the CASPER group at U.C. Berkeley. A large number of NRAO staff members, students, and university collaborators have contributed to GUPPI since it was first proposed at the Green Bank Future Instrumentation Workshop in the fall of 2006. However, most of the effort has come only in the last 8 months, from a smaller team comprising John Ford (Project Manager), Randy McCullough (Project Engineer), Scott Ransom (Project Scientist), Patrick Brandt, Paul Demorest, Ron DuPlain, Glen Langston, and Jason Ray.

In its first-light configuration, GUPPI already had finer time and frequency resolution than the GBT Spigot, and output all four Stokes parameters as well -- for a data rate of 190 MB/s! While GUPPI was configured to process 400 MHz of bandwidth with 8-bit sampling, the observations were made with the Green Bank 43-m telescope and an IF that passed only 200 MHz of bandwidth centered at 2 GHz. The target was the "young" 3-ms pulsar in the globular cluster M28 known as PSR B1821-24 (or M28A). With its large dispersion measure and complex pulse profile it is a difficult target for conventional filterbank-style pulsar backends, but GUPPI's detection looks fantastic.
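
The 190 MB/s figure works out under the following assumptions (the channel count and the accumulation length of 8 are our guesses that reproduce the quoted rate): 2048 channels, 4 Stokes parameters at 8 bits each, and an 800 MS/s real-sampled input:

```python
nchan, nstokes = 2048, 4
spectra_per_s = 800e6 / (2 * nchan) / 8       # native FFT rate / accumulation
rate = nchan * nstokes * 1 * spectra_per_s    # bytes per second (1 byte/value)
print(rate / 2**20, "MB/s")                   # ~190.7
```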

Over the next 1-2 months, GUPPI will be used for several expert-user observations in its "Phase I" configuration, which will eventually replace the GBT Spigot: 2048 or 4096 channels over 800 MHz of bandwidth, with either total-intensity or full-Stokes output. Meanwhile, work will continue on GUPPI "Phase II", a wide-bandwidth (500-800 MHz) coherent-dedispersion backend built for high-precision, high-sensitivity pulsar timing observations.

For more information, see CICADA or GUPPI.

Scott Ransom, for the GUPPI Project Team.