Tuesday, September 30, 2014

My New Development Setup

I have had people ask me about my development setup, and since I just upgraded to a new machine this past weekend, I figured I would blog (*cough* brag *cough*) about it.  First, a little history:

I have been a Windows developer for most of my career, so my work development machines have always been Windows based.  I switched my home computer to Mac in 2005 and can honestly say my life is better because of it.  I have 3 kids and a wife and have had to do very little system admin work at home during the past 9 years - Apple stuff just works.  When I joined OnPoint Medical Diagnostics in 2011, I decided to switch to Mac for work even though we were developing on Windows.  This may sound strange to some, but Apple hardware is generally cutting edge and the unibody MacBook Pros are incredibly sturdy.  I have heard some Microsoft people say that Windows runs best on Apple hardware - go figure.

I initially ran Windows via VMware Fusion on a Boot Camp partition and that worked great - I could run Windows in a virtual machine under Mac OS X or reboot into native Windows if I really wanted the speed.  I quickly discovered that I didn't need the extra speed, so I ended up running Windows under VMware on Mac OS X for several years without any issues.  In fact, I found this setup had many advantages:

  1. I put non-work stuff like iTunes, Spotify, Gmail, web surfing and games on Mac OS X and kept the Windows VM purely for work stuff (Visual Studio, source code, Outlook)
  2. It is also nice to be able to surf the web in Mac OS X while Windows updates are being installed (which seem to take several minutes and happen every other week).
  3. You can test (and debug) web apps on iPhone and iPad simulators via Xcode


Last April I started development of a 3D Visualization Server to integrate with cornerstone.  The server is written in C++ and uses the excellent VTK open source project.  VTK features the ability to do volume rendering using a software renderer as well as a GPU renderer.  The software renderer was working great, but I was curious how much faster it would be with a GPU.  Unfortunately my Mac Mini at home didn't have a GPU so I couldn't test it - I ended up getting a good deal on a mint condition 2012 15" Retina MacBook Pro off of Craigslist which had a GPU.

The new 2012 MBP was amazingly fast - unfortunately VTK appears to have a bug on Mac OS X and the GPU renderer produces black images.  Interestingly enough, GPU rendering works fine when run on a Windows virtual machine under Mac OS X (and it is blazing fast - 60 FPS at full image quality).  As time went on, I ended up building a number of virtual machines to support various consulting projects (each customer gets its own VM) and I ran out of space on the MBP's 256 GB SSD drive.  I bought a couple of MyDigitalSSD OTG drives to store the VMs (these are absolutely AWESOME - highly recommended), but I had all these cables lying around and it was disrupting my work area feng shui, so I was getting the itch to upgrade again.

The key criteria for a new machine were the following:
  • 16 GB RAM minimum, ideally 32 GB RAM
  • i7 Quad Core
  • 1 TB PCIe flash storage
  • GPU

The only two options were a 15" Retina MBP or a Mac Pro.  I have actually been wanting a Mac Pro since they came out, but they are pricey and I would still need a laptop for travel, which adds even more cost.  Two separate machines also means I have to deal with data synchronization, and ain't nobody got time for that.  It turns out that the 15" MBP is actually faster than the least expensive Mac Pro - check out the performance comparisons here.  The only issue I had with the MBP route is that the maximum RAM is 16 GB, which tends to disappear quickly when you have a few VMs running.  This past weekend I decided to bite the bullet and picked up a top of the line 15" MBP and absolutely love it.  It feels about 33% faster than my 2012 MBP - most of which I am guessing is from the PCIe flash drive.  Check out these numbers:

In addition to the MBP, I have an Apple Thunderbolt Display, an old Dell 24" monitor in portrait mode and an iPad Air, all on top of a nice glass desk.  I use an Apple keyboard with number pad and a Logitech G500 mouse.  The MBP sits in a vertical stand which helps keep it cooler.  Here is a picture of my setup:




Thursday, September 25, 2014

Amazon AppStream - the savior for server side rendering?

Amazon AppStream provides a platform for server side rendering that promises to be interactive enough for gaming. Since gamers have high standards for interactivity, this technology could very well enable cloud based server side rendering for medical imaging.

Currently AppStream only has servers in the US East region of AWS (located in Northern Virginia), which means that interactivity will decrease the farther you get from the datacenter.  Amazon does plan to expand AppStream to the other regions, which in the US are located in Northern California and Oregon (click here for zones outside of the US).  The system will automatically connect clients to the server that will provide the best experience, which is really convenient for application developers.  Interactivity may indeed be very good for the west coast and some parts of the east coast, but it may not be good enough for the rest of the country.  Amazon could of course solve this problem by adding additional AppStream data centers across the country, and this seems reasonable if the service is indeed successful.

One cool trick AppStream uses is H.264 to stream the screen from the server to the client.  H.264 is one of the more popular video codecs in use today - it is used in Blu-ray discs, iTunes and YouTube.  H.264 is computationally expensive to encode, so using it for real time streaming is truly impressive.  They must be doing the encoding on the GPU or perhaps even on specialized hardware.

While this technology certainly has potential, the lack of encryption of the video stream will prevent it from being used for PHI due to HIPAA.  Life Sciences and medical imaging are stated use cases, so hopefully they can resolve this in the future.

Note: All of the above information is based on what I gleaned from the publicly available information on the Amazon AppStream web site (specifically the FAQs).

Tuesday, September 23, 2014

Server Side Rendering

The term "Server Side Rendering" in medical imaging generally refers to executing all or a portion of the image rendering pipeline on a server and sending the resulting image to the client.  There are many benefits of server side rendering:

1. A server can usually load image data from the archive or cache much faster than the client can.  Servers are located in the datacenter, where machines can have 1 Gb/s+ bandwidth between each other.  A client's bandwidth will often be much lower, especially if it is not in the same physical building.  In many cases a client has around 15 Mb/s, which is still 66x slower than the datacenter.  To put this into perspective, a 1000 slice CT volume would load in ~4 seconds over a 1 Gb/s connection but would take over 4 minutes at 15 Mb/s (see the sketch after this list).  The faster load time is perhaps the best reason to do server side rendering for volumetric data sets.
2. The resulting image is typically much smaller than the source data.  For example, a 1000 slice CT volume is 500 MB of data, yet an MPR or Volume Rendered view of that data in JPEG may only be 5 kB in size.  Another example: a 2D projection radiograph is often 30 MB in size, yet a rendered JPEG of that image may only be 30 kB in size.
3. Minimal client side hardware dependencies.  Volume rendering and MPR often require higher end hardware than is typically found on client machines.  For example, a GPU based rendering engine often requires specific GPU cards and drivers to function, which are not usually found on enterprise desktop PCs.
4. Minimal client side software installation.  Medical imaging software can be difficult to install across the enterprise due to the variety of operating systems and hardware found on enterprise PCs.  One strategy to deal with this is to install the software on a machine in the datacenter and allow a client to access it via a remote desktop protocol like VNC or RDP.
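
To make the bandwidth arithmetic in point 1 concrete, here is a trivial JavaScript sketch.  The 500 MB study size comes from point 2, and the link speeds are the ones discussed above:

// Estimated load time for a study of a given size over a given link.
// Sizes are in megabytes (MB), link speeds in megabits per second (Mb/s),
// so multiply bytes by 8 to convert to bits.
function loadTimeSeconds(studySizeMB, linkSpeedMbps) {
  return (studySizeMB * 8) / linkSpeedMbps;
}

console.log(loadTimeSeconds(500, 1000)); // 1 Gb/s datacenter link -> 4 seconds
console.log(loadTimeSeconds(500, 15));   // 15 Mb/s client link -> ~267 seconds (over 4 minutes)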

While server side rendering sounds great, interactivity suffers dramatically when latency is present.  While bandwidth is generally not an issue for medical imaging today, latency still is.  If you think about how server side rendering works, the client has to send user activity (mouse movements, key presses) to the server, which processes it and sends back the resulting image.  The time it takes to send a message from the client to the server and back again is the latency.  In the enterprise, latency is usually quite low - around 1 ms.  Latency this low provides a user interface so responsive that many users cannot distinguish it from local client side rendering.  Unfortunately many users need to access images outside of the enterprise, where latency quickly becomes an issue.  Assuming server side rendering is instant (which is not necessarily the case), latency alone caps the best case frame rates:

25 ms ping = 40 FPS
50 ms ping = 20 FPS
100 ms ping = 10 FPS
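
These frame rates fall straight out of the round trip: each rendered frame needs at least one full client-to-server-and-back trip, so the best case frame rate is simply 1000 ms divided by the ping time.  A trivial JavaScript sketch of that arithmetic:

function bestCaseFps(pingMs) {
  // each frame requires at least one full round trip to the server
  return 1000 / pingMs;
}

console.log(bestCaseFps(25));  // 40 FPS
console.log(bestCaseFps(50));  // 20 FPS
console.log(bestCaseFps(100)); // 10 FPS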

In a MAN or WAN you may have 25-50 ms, which will meet most users' needs.  Once you get outside of the MAN/WAN (e.g. across the country or planet), latency quickly jumps up to the 80-150 ms range.  This is a major issue for remote users as well as for any cloud based system that uses server side rendering.  The major cloud vendors such as Amazon and Azure do have a number of data centers that are geographically distributed throughout the country (and world).  It is possible to deploy a server to each datacenter and connect each client to the server with the lowest ping time.  This doesn't happen automatically though, and is something to consider when looking at any cloud based solution.



Monday, September 22, 2014

Simple C++ GZip codec with Zlib

I needed a simple gzip compress/uncompress function but was having trouble finding a good example.  I am posting this very simple C++ GZip codec using Zlib in hopes that it helps others:

https://github.com/chafey/GZipCodec

PS - I discovered boost has gzip support after I coded this; definitely use that instead of my code if you can.
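
For what it's worth, if you need the same round trip from Node.js rather than C++, recent versions of Node's built-in zlib module handle gzip directly - a minimal sketch:

var zlib = require('zlib');

// compress a buffer with gzip, then decompress it back again
var original = new Buffer('hello gzip');
var compressed = zlib.gzipSync(original);       // gzip-wrapped deflate stream
var roundTripped = zlib.gunzipSync(compressed); // should match the original

console.log(roundTripped.toString()); // 'hello gzip'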




Thursday, September 11, 2014

WADO-RS Overview

WADO-RS was added to the DICOM standard in 2011 with Supplement 161.  The RS stands for REST or RESTful; RESTful services are generally easier to understand and work with than WS* web services.  WADO-RS was mainly driven by the need to provide a way for clients to access multiple SOP Instances in one HTTP request, which the MINT project had shown to offer significant performance gains.

One of the key concepts that WG 27 took from the MINT project was bulk data.  A bulk data item is a field in a DICOM SOP Instance that is typically very large - such as the Pixel Data field (7FE0,0010).  To maximize performance, all fields for a study can be retrieved at once, with bulk data fields replaced by a URL that can be used to obtain the bulk data item via a separate request.  This enables clients to stream just the pieces of the study they want, when they need them - a strategy often used by image viewers to deliver images "on demand".

WADO-RS provides multiple ways to access SOP Instances to support a variety of use cases and scenarios:

  1. RS – RetrieveMetadata.  This allows a client to retrieve all fields (except bulk data) for all SOP Instances in a study.  It supports both XML and JSON responses.  The JSON response is an array of objects, each of which contains all of the fields for one SOP Instance in the study.  Bulk data fields are replaced with a URL which can be used to get the actual bulk data separately (see the sketch after this list).  The XML response is a multi-part MIME message with each SOP Instance returned as a separate XML document encoded as a single part.
  2. RS – RetrieveBulkdata.  This is the mechanism to retrieve a single bulk data item as returned in the RS-RetrieveMetadata response.  By default the bulk data is returned in Little Endian transfer syntax, but other transfer syntaxes can be requested (e.g. JPEG 2000).
  3. RS – RetrieveFrames.  This mechanism allows a client to get all image frames for a study, series or SOP Instance in one request.  The frames are returned in a multi-part MIME message with each frame encoded as a single part.  By default frames are returned in Little Endian transfer syntax, but other transfer syntaxes can be requested (e.g. JPEG 2000).
  4. RS – RetrieveStudy.  This allows a client to obtain all SOP Instances for a study in one request.  Each SOP Instance is sent as a DICOM P10 byte stream (application/dicom) encoded as a separate part of a multi-part MIME message.  You can also request just the bulk data items for a study, which are returned in a multi-part MIME message with each bulk data item as an individual part.
  5. RS – RetrieveSeries.  Same as RS-RetrieveStudy but scoped to a series.
  6. RS – RetrieveInstance.  Same as RS-RetrieveStudy but scoped to an individual SOP Instance.
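
To make this concrete, here is a minimal browser JavaScript sketch of an RS-RetrieveMetadata call.  The base URL and StudyInstanceUID are made up for illustration; the studies/{StudyInstanceUID}/metadata path and the Accept header follow the WADO-RS conventions described above:

// hypothetical WADO-RS origin server and study - replace with real values
var baseUrl = 'http://localhost:8080/wado-rs';
var studyUid = '1.2.3.4';

var xhr = new XMLHttpRequest();
xhr.open('GET', baseUrl + '/studies/' + studyUid + '/metadata');
xhr.setRequestHeader('Accept', 'application/json'); // request the JSON response
xhr.onload = function() {
  // the response is an array with one object per SOP Instance; bulk data
  // fields contain a BulkDataURI to fetch instead of the actual bytes
  var instances = JSON.parse(xhr.responseText);
  console.log(instances.length + ' SOP Instances in study');
};
xhr.send();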


Tuesday, September 2, 2014

DICOM WADO and WADO-URI

The DICOM standard defines web service based functionality in PS 3.18.  DICOM Working Group 27 (Web Technology for DICOM) oversees the standardization of web services.  You can read more about WG 27 in DICOM's strategy document and access its meeting notes.

The first web service mechanism standardized was WADO (Web Access to DICOM Objects), which provides HTTP GET access to SOP Instances.  WADO was added via Supplement 85 in 2003.  WADO was recently renamed WADO-URI to avoid confusion with the new WADO-RS (RESTful) and WADO-WS (WS*) standards.  The industry is still coming up to speed with this new terminology, so the term "WADO" generally refers to what is now called WADO-URI, as WADO-RS and WADO-WS are not widely in use yet.

WADO-URI supports access to SOP Instances via HTTP GET.  Built into HTTP is a mechanism to request data in a variety of formats (or MIME types).  There are hundreds of MIME types defined; here are some of the more common MIME types behind the web pages you see when browsing the web:


  • text/html - HTML documents
  • text/css - CSS documents
  • image/jpeg - JPEG images
  • image/png - PNG images

Different types of SOP Instances can be returned as different MIME types.  For example, a single frame image SOP Instance can be rendered as image/jpeg, but multi-frame image and structured report SOP Instances cannot.  A structured report can be retrieved as an HTML document, but an image SOP Instance cannot.

DICOM also defined its own MIME type 'application/dicom' which refers to a DICOM P10 byte stream.  By default, a requested SOP Instance should be returned in Explicit VR Little Endian transfer syntax.  Other transfer syntaxes can be requested (e.g. JPEG 2000), but the server does not have to honor the request.

Here is an example of a WADO-URI URL that requests a SOP Instance rendered as a JPEG:

http://localhost:8080/wado?requestType=WADO&studyUID=1.3.6.1.4.1.25403.166563008443.5076.20120418075541.1&seriesUID=1.3.6.1.4.1.25403.166563008443.5076.20120418075541.2&objectUID=1.3.6.1.4.1.25403.166563008443.5076.20120418075557.1

A few notes about this URL:
  1. You must know the StudyInstanceUID, SeriesInstanceUID and SOPInstanceUID.  The QIDO-RS standard will allow you to query for these via a REST call, but it is new and very few systems support it right now.
  2. The default MIME type is image/jpeg, which returns a JPEG image the same size as the DICOM image with a server selected window/level.  Your web browser will display this rendered image if you paste the URL into it!

While rendered images are nice for integration with web based applications, there are three disadvantages:
  1. You need to re-render the image on the server if you need to adjust the window width or window center
  2. You don't have access to any other DICOM Fields (e.g. patient name, patient id, study description, etc)
  3. The image is not diagnostic due to lossy JPEG encoding (this can be avoided if the server supports rendering to the image/png MIME type)

To address these issues, you can request the SOP Instance as a DICOM P10 byte stream.  To do this, you need to request that it be returned with the application/dicom MIME type using the contentType parameter.  Here is an example of this for the same SOP Instance used above:

http://localhost:8080/wado?requestType=WADO&studyUID=1.3.6.1.4.1.25403.166563008443.5076.20120418075541.1&seriesUID=1.3.6.1.4.1.25403.166563008443.5076.20120418075541.2&objectUID=1.3.6.1.4.1.25403.166563008443.5076.20120418075557.1&contentType=application%2Fdicom



Note that the '/' in 'application/dicom' is converted into %2F to conform with URL encoding rules.  Diagnostic viewers will typically use the application/dicom access method to avoid the three issues listed above with the rendered image mode.  One drawback of the application/dicom access method is that the responses are typically much larger: an uncompressed 256x256 MRI image is 128 kB, while a JPEG rendered version may be only 3 kB!
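
Here is a small JavaScript sketch that builds this URL and fetches the P10 byte stream as binary data; encodeURIComponent handles the %2F escaping described above (the host, port and UIDs are the made-up values from the example URLs in this post):

// made-up server and UIDs from the example URLs above
var baseUrl = 'http://localhost:8080/wado';
var url = baseUrl +
  '?requestType=WADO' +
  '&studyUID=1.3.6.1.4.1.25403.166563008443.5076.20120418075541.1' +
  '&seriesUID=1.3.6.1.4.1.25403.166563008443.5076.20120418075541.2' +
  '&objectUID=1.3.6.1.4.1.25403.166563008443.5076.20120418075557.1' +
  '&contentType=' + encodeURIComponent('application/dicom'); // -> application%2Fdicom

// fetch the DICOM P10 byte stream as raw bytes, not text
var xhr = new XMLHttpRequest();
xhr.open('GET', url);
xhr.responseType = 'arraybuffer';
xhr.onload = function() {
  var byteArray = new Uint8Array(xhr.response);
  console.log('received ' + byteArray.length + ' bytes of DICOM P10');
};
xhr.send();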

Working around CORS

While building web applications, I sometimes run into the case where my HTTP requests fail because the web server does not support Cross-Origin Resource Sharing (CORS).  One way to work around this is to use an HTTP proxy that adds the missing CORS header to the response.  This is easily done with Node.js and the http-proxy library.  Here is a script that creates a proxy listening on port 8000, forwards each HTTP request to localhost:8042 and returns the response with the CORS header added:

var httpProxy = require('http-proxy');

// proxy every request received on port 8000 to the target server
var proxy = httpProxy.createProxyServer({target: 'http://localhost:8042'}).listen(8000);

proxy.on('proxyRes', function(proxyRes, req, res) {
  // add the CORS header to the response before it goes back to the browser
  res.setHeader('Access-Control-Allow-Origin', '*');
});

proxy.on('error', function(err, req, res) {
  // suppress proxy errors (e.g. target not reachable)
});

You can also work around this by disabling web security in your browser.  For example, Chrome can be started with the --disable-web-security flag.
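
On a Mac you can launch Chrome that way from the terminal (note that this disables the same-origin policy for the entire browser session, so only use it for testing against servers you trust):

open -a "Google Chrome" --args --disable-web-security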