
Visualising Big Data

Some years ago, I found myself responsible for some of the biggest compute farms in the world and the software platforms that drove them. Their purpose was to make sense of the vast quantities of data emerging from the Human Genome Project. Each quarter we had to calculate a huge number of relationships between the sequence and structure of every known protein. That data was then combined into an Oracle database that we licensed, along with the tools to analyse it, to other biotechs and global pharmaceutical organisations. Data growth of 25% per quarterly run (roughly 144% growth per annum, compounded) created all sorts of architectural and technical challenges that affected software design, database design and hardware performance. We migrated most of our infrastructure to hosted, cloud-type services as the most effective way to manage that scale of infrastructure growth. But some of the bigger challenges were around how to present the resulting data to a wide variety of research scientists around the globe.
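The compounding arithmetic behind that growth rate is easy to check. A minimal sketch (the 25% per-quarter figure is the one quoted above; everything else here is illustrative):

```python
# 25% data growth per quarterly run, compounded over the
# four runs in a year:
quarterly_growth = 0.25
annual_growth = (1 + quarterly_growth) ** 4 - 1

print(f"{annual_growth:.0%}")  # → 144%
```

In other words, each year's data volume is nearly two and a half times the previous year's, which is what made the infrastructure planning so demanding.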

Bespoke visualisation tools, database design and tuning, and web design all had to perform when delivering analysis across database tables of 25 billion rows. The power of that data lay in its potential to identify new drug targets (proteins that a drug just might act on) and the compounds that might act on them. But without visualisation and analysis tools, finding the patterns that could lead to the discovery of a new drug or drug target was highly unlikely.

Ten years later, Forbes' forecast of 2017 technology trends highlights Big Data, or more accurately "Humanized Big Data" (visual, empathetic, qualitative), as an area that will see major advances. During 2017, if Forbes is correct, it won't just be scientists who need to innovate around integrating human-friendly tools, like Spotfire and Qlik, that bring disparate, complex, large-volume data into focus. So the question for all of us responsible for IT is: what are we doing this year to organise and manage our enterprise data better AND to innovate around making that data truly useful and accessible?