Recently there has been an incredible amount of excitement about applying blockchain technology to healthcare. This weekend I tried to move beyond the hype and find a tangible application of blockchain in healthcare that I could fully understand. Despite many hours of research, I am disappointed to report that most of what is out there is hype or still in development - there are very few (if any) actual applications.
In addition to the lack of real-world applications, many of the blockchain healthcare projects require significant changes to the core blockchain technology used in bitcoin. This is concerning because blockchain is still quite new, and the best example we have of it working is bitcoin itself. Bitcoin is quite delicate, and its developers are extremely cautious about any changes to it. Applying the core blockchain technology to healthcare will of course require changes, but such changes are very risky and should be made with caution, or we risk losing the key benefits that are promised.
Given this, I thought it would be useful to understand bitcoin's use of blockchain before trying to find opportunities for applications in healthcare. This way we can filter candidate healthcare applications against the features bitcoin already provides, and transfer our confidence in bitcoin to our healthcare use of blockchain.
Benefits
* Immutability - Once data is stored on the blockchain, you can be confident that it will never be changed. This is possible because blockchain is essentially an append-only transaction log which resides on many machines.
* Durability - Once data is stored on the blockchain, it will never be lost. This is accomplished by duplicating data across many machines - traditional backup/restore mechanisms are neither applicable nor needed.
* Reliability - Once data is stored on the blockchain, it can be retrieved at any point in the future. This is possible because the data is stored on many machines and retrieving it requires access to just one of them.
* Transparency - Once data is stored on the blockchain, it is visible to everyone. This is possible because blockchain does not implement any form of authentication or authorization to access the data.
* Anonymity - Anyone can put data on the blockchain. This is likewise possible because blockchain does not implement any form of authentication or authorization.
* Integrity - Blockchain transactions are validated against a defined protocol. This protocol can include logic such as ensuring validity of the actors involved in the transaction via public key cryptography and validation of the data in the transaction (see the sketch after this list).
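To make the immutability and integrity bullets concrete, here is a minimal sketch in JavaScript of an append-only log where each entry is chained to the previous one by a hash. This is nothing like bitcoin's real data structures, consensus mechanism, or signature scheme - it only illustrates why tampering with stored data is detectable:

```javascript
// Minimal hash-chained append-only log (illustrative only, not bitcoin)
const crypto = require('crypto');

function sha256(text) {
  return crypto.createHash('sha256').update(text).digest('hex');
}

// Each block stores the hash of the previous block, so altering any earlier
// block changes its hash and breaks every link that follows it.
function appendBlock(chain, transaction) {
  const previousHash = chain.length ? chain[chain.length - 1].hash : '0'.repeat(64);
  const hash = sha256(previousHash + JSON.stringify(transaction));
  chain.push({ previousHash, transaction, hash });
  return chain;
}

// Recompute every hash to detect tampering anywhere in the chain.
function verifyChain(chain) {
  return chain.every(function (block, i) {
    const expectedPrevious = i === 0 ? '0'.repeat(64) : chain[i - 1].hash;
    return block.previousHash === expectedPrevious &&
           block.hash === sha256(block.previousHash + JSON.stringify(block.transaction));
  });
}

const chain = [];
appendBlock(chain, { from: 'alice', to: 'bob', amount: 1 });
appendBlock(chain, { from: 'bob', to: 'carol', amount: 1 });
console.log(verifyChain(chain)); // true
chain[0].transaction.amount = 100; // tamper with history
console.log(verifyChain(chain)); // false
```

In bitcoin this detection is backed by proof of work and thousands of independent nodes, which is what turns "tampering is detectable" into "tampering is practically impossible."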
What opportunities are there in healthcare for applying these blockchain benefits? The first thing that comes to mind is integrity of healthcare records which I will describe in a subsequent blog post.
Blockchain is still quite new to me so feel free to leave a comment if I got something wrong. Questions are always welcome as well!
Thursday, September 29, 2016
JPEG2000 performance numbers on cornerstone
At yesterday's OHIF Community meeting I reported a 3x speedup in JPEG2000 performance with the latest dev branch of cornerstone, putting us within 50% of native performance. The speedup really surprised me, and I can't explain it other than that there must have been many speed improvements in OpenJPEG 2.1.1. After the presentation, OpenJPEG 2.1.2 was released with additional speed improvements, and more are currently in development. The future for JPEG2000 performance is looking very good.
Image: 3063x4664 MG JPEG2000 Image (MG1 from dclunie’s compression samples)
Test Environment: MacBook Pro (Retina, 15-inch Mid 2014, 2.8 GHz Intel Core i7, macOS Sierra):
Native:
Kakadu Speedpack (vs7_8-01480C) decompress with 8 threads – 0.259 seconds
Kakadu Speedpack (vs7_8-01480C) decompress with 1 thread – 1.186 seconds
Kakadu (v7_8-01480C) decompress with 8 threads – 0.345 seconds
Kakadu (v7_8-01480C) decompress with 1 thread – 1.7862 seconds
OpenJPEG (master) decompress with 1 thread – 4.165 seconds
OpenJPEG (master) decompress with 8 threads – 2.113 seconds
OpenJPEG (2.1.1) decompress with 1 thread – 4.847 seconds
Grok (master) decompress with 1 thread – 2.810 seconds
JavaScript (OpenJPEG 2.1.0 + EMSCRIPTEN 1.35.0):
Cornerstone (master) on Chrome 53 – 20.370 seconds
JavaScript (OpenJPEG 2.1.1 + EMSCRIPTEN 1.35.0):
Cornerstone (dev) on Chrome 53 – 6.693 seconds
Cornerstone (dev) on Firefox 48 – 6.832 seconds
Cornerstone (dev) on Safari 10 – 8.040 seconds
Cornerstone (dev) on Opera 39 – 6.882 seconds
Cornerstone (dev) on Chrome 53 in Windows 7 VM – 7.899 seconds
Cornerstone (dev) on IE 11 in Windows 7 VM – 17.832 seconds
Cornerstone (dev) on Edge in Windows 10 VM – 11.994 seconds
Wednesday, September 14, 2016
JPEG2000 Decoding Performance
The OpenJPEG project just integrated a PR which adds support for decoding across multiple threads. I decided to do a quick performance comparison to see where things stand:
3063x4664x16-bit grayscale JPEG2000 image (MG1_J2KR) from dclunie's compression samples:
On my MacBook Pro (Retina, 15-inch Mid 2014, 2.8 GHz Intel Core i7, macOS Sierra):
Kakadu (v7_8-01480C) decompress with 8 threads - .345 seconds
Kakadu (v7_8-01480C) decompress with 1 thread – 1.7862 seconds
OpenJPEG (master) decompress with 1 thread – 4.165 seconds
OpenJPEG (master) decompress with 8 threads – 2.113 seconds
OpenJPEG (2.1.1) decompress with 1 thread – 4.847 seconds
Grok (master) decompress with 1 thread – 2.810 seconds
Quick Summary:
* Kakadu is still king. They have a new Speedpack add-on that supposedly improves decode performance by 50%
* Grok has made some strong performance improvements over OpenJPEG
* The OpenJPEG multi-core work is a fantastic enhancement
I also ran numbers for cornerstone, which I will be presenting at the OHIF Community meeting Sept 28 - everyone is invited to attend.
Chris
Saturday, September 10, 2016
JPEG2000 vs JPEG-LS Decode Performance in the Web Browser
In a previous post, I reported that JPEG2000 is a performance bottleneck in medical imaging systems. To help provide some data to back this claim up, I ran some quick performance comparisons between the two codecs using cornerstone. Here are the results:
Test Platform:
MacBook Pro (Retina, 15-inch Mid 2014)
Processor: 2.8 GHz Intel Core i7
OS: MacOS Sierra GM
Browser: Google Chrome 52.0.2743.116 (64-bit)
Test Harness:
http://rawgit.com/chafey/cornerstoneWADOImageLoader/dev/examples/dicomfile/index.html
Test Data:
ftp://medical.nema.org/MEDICAL/Dicom/DataSets/WG04
Test Results:
CT2 Image
JPEG2000: 133ms
JPEG-LS: 14ms
MG1 Image:
JPEG2000: 6505 ms
JPEG-LS: 752 ms
In this test, JPEG-LS is about 10x faster at decoding than JPEG2000 in cornerstone. JPEG2000 decode speed should improve in time as OpenJPEG performance improves and Web Assembly is implemented by web browsers. Performance is likely to vary between web browsers and versions as well.
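For anyone who wants to reproduce this kind of measurement, here is a minimal sketch of the timing approach. It assumes cornerstone and cornerstoneWADOImageLoader are already set up on the page, and the imageId below is a placeholder - the test harness linked above is the actual setup used:

```javascript
// Minimal sketch: time how long cornerstone takes to load and decode an image.
// cornerstone.loadImage returns a then-able that resolves with the decoded image.
function timeDecode(imageId) {
  const start = performance.now();
  return cornerstone.loadImage(imageId).then(function (image) {
    const elapsed = performance.now() - start;
    console.log(imageId + ' decoded in ' + elapsed.toFixed(1) + ' ms');
    return image;
  });
}

// Placeholder imageId - the dicomfile example in the test harness registers
// its own imageIds for locally selected DICOM files.
timeDecode('dicomfile://0');
```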
Update: For reference purposes, here are the compressed sizes of the images used in the above tests:
CT2_J2KR: 119,540 bytes
CT2_JLSL: 115,504 bytes (~4% better)
MG1_J2KR: 12,230,378 bytes
MG1_JLSL: 12,019,930 bytes (~2% better)
Friday, September 9, 2016
JPEG-LS - The good, the bad and the ugly
Following up on my previous post on JPEG2000 - the good, the bad and the ugly, I figured I would do the same for JPEG-LS, another compression algorithm standardized by DICOM that is not well understood. JPEG-LS is based on the LOCO-I algorithm developed by HP, which happens to work very well for medical images. Here is a high level block diagram of the LOCO-I algorithm:
The Good:
- JPEG-LS delivers one of the highest lossless compression ratios. It often beats JPEG2000 by a small margin
- JPEG-LS supports a variety of pixel formats - 8 bit gray, 16 bit gray, 8 bit color
- JPEG-LS supports lossy and lossless encodings.
- JPEG-LS has a well supported open source C++ implementation named CharLS. I have done an EMSCRIPTEN build of CharLS to JavaScript for use with cornerstone (see the sketch after this list).
- Most applications supporting JPEG-LS are presumed to be using CharLS. Since most people use the same library, there should be few interoperability issues
- JPEG-LS is extremely fast at encoding and decoding
- JPEG-LS is simpler than other compression algorithms (e.g. JPEG2000) and is therefore easier to implement
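As an aside on the EMSCRIPTEN build mentioned above, here is a rough sketch of the copy-in/decode/copy-out pattern typical of driving an Emscripten-compiled codec from JavaScript. The exported symbol _jpegls_decode and its signature are hypothetical stand-ins - the real CharLS build exposes its own wrapper:

```javascript
// Hypothetical sketch of calling an Emscripten-compiled decoder. 'Module' is
// the object Emscripten generates; '_jpegls_decode' is a made-up export
// standing in for whatever the actual CharLS wrapper exposes.
function decodeJpegLs(encodedBytes, decodedLength) {
  // Copy the encoded bitstream into the Emscripten heap
  const srcPtr = Module._malloc(encodedBytes.length);
  Module.HEAPU8.set(encodedBytes, srcPtr);

  // Allocate space in the heap for the decoded pixels
  const dstPtr = Module._malloc(decodedLength);

  // Call the (hypothetical) exported C function
  const status = Module._jpegls_decode(srcPtr, encodedBytes.length, dstPtr, decodedLength);

  // Copy the decoded pixels back out of the heap and release the allocations
  const pixels = new Uint8Array(Module.HEAPU8.buffer, dstPtr, decodedLength).slice();
  Module._free(srcPtr);
  Module._free(dstPtr);

  if (status !== 0) {
    throw new Error('JPEG-LS decode failed with status ' + status);
  }
  return pixels;
}
```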
The Bad:
- Most applications supporting JPEG-LS are presumed to be using CharLS. Bugs in the implementation may exist and be unknown since most applications use the same library
- JPEG-LS has limited support for controlling image quality in lossy encoding mode
- JPEG-LS has no support for progressive downloading
- JPEG-LS depends on IP held by HP. HP allows use of its IP for implementations that meet certain requirements
The Ugly:
- JPEG-LS is not widely adopted in the industry.
NOTE: Corrections and clarifications are always appreciated; I will update this article as I receive them. Thank you!
Thursday, September 8, 2016
JPEG 2000 - The good, the bad and the ugly
JPEG2000 has emerged as the preferred transfer syntax for archiving medical images. If you ask why JPEG2000 is so popular, most people will state that it offers the highest lossless compression ratios. While images can be archived using lossy compression algorithms (which feature higher compression ratios), most have elected to archive using lossless to avoid any possible risk associated with the image degradation that occurs in lossy image compression. Compression ratio is important because it directly impacts archive storage costs.
While some believe that JPEG2000 is the ultimate compression algorithm, this isn't necessarily true for everyone - there are some bad and ugly aspects to JPEG2000 that are not well understood:
The Good:
- JPEG2000 delivers one of the highest lossless compression ratios.
- JPEG2000 supports progressive image transmission through several mechanisms
- A discrete wavelet transform is applied to the pixel data which breaks the image down into multiple resolution levels. Lower resolution versions of the image can be viewed with just a portion of the entire JPEG2000 encoded bitstream (see the sketch after this list)
- Wavelet coefficients are encoded as bitplanes and can therefore be decoded a bitplane at a time
- Certain pixels (regions of interest) can be encoded before others allowing them to be displayed first
- JPEG2000 supports a variety of pixel formats - 8 bit gray, 16 bit gray, 8 bit color (and others not typically used in medical imaging)
- JPEG2000 supports lossy and lossless encodings
- JPEG2000 delivers excellent image quality in lossy compression mode when compared to other compression algorithms
- JPEG2000 combined with JPIP is a very effective way of delivering images from a server to a client and is a DICOM standard
- There are several open source implementations of JPEG2000 including some GPU accelerated versions. Intel has low level libraries that can be used to decode JPEG2000 using the advanced features found on their modern CPUs
- JPEG2000 is a supported image encoding for PDF files
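To make the resolution level mechanism concrete, here is a small sketch of how the wavelet decomposition divides an image into resolution levels. The math follows directly from each level halving the previous one's dimensions; the choice of 5 decomposition levels below is just a common encoder default, not anything specific to the images in these posts:

```javascript
// Each JPEG2000 resolution level halves the dimensions of the level above it,
// so level r of a W x H image is roughly ceil(W / 2^r) x ceil(H / 2^r).
// A viewer can render a thumbnail from only the low-resolution portion of the
// encoded bitstream.
function resolutionLevels(width, height, numLevels) {
  const levels = [];
  for (let r = 0; r <= numLevels; r++) {
    levels.push({
      level: r,
      width: Math.ceil(width / Math.pow(2, r)),
      height: Math.ceil(height / Math.pow(2, r)),
    });
  }
  return levels;
}

// The 3063x4664 MG1 image from the performance posts, with 5 levels:
console.log(resolutionLevels(3063, 4664, 5));
// The smallest level is 96x146 - thumbnail-sized, from a fraction of the data
```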
The Bad:
- New compression algorithms have been developed that claim to beat JPEG2000 in compression ratio, encode speed and decode speed (e.g. WebP, BPG, JPEG-XR, FLIF)
- Very few products have leveraged the advanced features of JPEG2000 due to the associated complexity.
- JPEG2000 has not been widely adopted in the technology industry.
- Popular frameworks such as .NET, Java and Node.js have poor or nonexistent support for JPEG2000.
- Most web browsers are unable to natively display JPEG2000 images. Safari has support, but access to 16-bit gray data is unknown
- Few digital cameras are capable of encoding JPEG2000 images.
The Ugly
- JPEG2000 is an extremely complex algorithm which requires a high level of expertise in image compression and software engineering to implement.
- There are very few SDK implementations of JPEG2000 in the world - my guess is there are fewer than five complete SDK implementations and fewer than 20 unique SDK implementations. The lack of SDK implementations makes it difficult for developers to add support for JPEG2000
- Not all implementations implement all features. JPIP in particular is only implemented by two or three libraries
- Interoperability issues have occurred due to misinterpretation of the standard or bugs in the implementation
- JPEG2000 is extremely CPU intensive. While CPU power is plentiful today, JPEG2000 encoding and decoding is often the main bottleneck experienced in medical imaging systems.
Have questions or interested in learning more about JPEG2000 or image compression? Post a question or suggestion for another article!
Update: Mathieu Malaterre provided the following link to JPEG2000 implementations
Monday, February 8, 2016
Big Endian vs Little Endian
Today I decided to add support for explicit big endian images to cornerstone. Sometimes I run into images that don't render properly and wonder if this is due to a byte swap issue. To help me remember what a byte swapping issue looks like, I figured I would take screen captures with and without the bytes swapped.
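As a refresher on what the swap itself does, here is a minimal sketch (the function name is my own) of converting explicit big endian 16-bit pixel data to the little endian layout that x86 machines and the typed arrays on them expect:

```javascript
// Swap adjacent bytes in place to convert 16-bit big endian pixel data to
// little endian (the operation is its own inverse).
function byteSwap16(byteArray) {
  for (let i = 0; i < byteArray.length - 1; i += 2) {
    const tmp = byteArray[i];
    byteArray[i] = byteArray[i + 1];
    byteArray[i + 1] = tmp;
  }
  return byteArray;
}

// Big endian 0x0102 is stored as [0x01, 0x02]; after the swap, a little
// endian Uint16Array reads it back as 0x0102 (258) correctly.
const pixels = new Uint8Array([0x01, 0x02, 0x03, 0x04]);
byteSwap16(pixels);
console.log(new Uint16Array(pixels.buffer)); // Uint16Array [ 258, 772 ]
```

Here is the image displayed properly: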