The internet is virtually everywhere. Web 2.0 is so 2010; we are now in the age of the internet of everything. It is trying to get into our homes, our cars, maybe even our food. In the not-too-distant future, the virtual and physical worlds will be inextricable. Soon it will get under our skin, literally.
However, will the real world and the virtual world be able to communicate? They speak different languages: one hypertext, the other 6,500 tongues. They have incompatible personalities: one is governed by physics, the other by protocol; one is emotional, the other coldly logical. It can only end one way: badly.
But put away those Skynet visions because there are erudite folks out there already looking for answers to this futuristic quandary.
These are questions we often ask ourselves on the data science team at Adoreboard, based at Queen’s University: What does the future of data look like? How do we visualise the proliferation of information that will inevitably come from the internet of things? So we teamed up with a few friends to experiment.
Having identified and analysed Twitter feeds from a range of brands and individuals, detecting more than 20 types of human emotion, we set about producing a music track. The analysed data was passed to Patchblocks, who first rendered the beats; Ministry of Sound then arranged for Sheldon to produce and mix them into a unique sound, demonstrating how online emotions like joy and anger could be translated into the language of music. The track was then presented and mixed at the closing party for the Field.work Data Festival in London.
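For the technically curious, here is a toy sketch in Python of what mapping emotion scores to musical parameters might look like. To be clear, this is not the actual Adoreboard or Patchblocks pipeline: the emotion names, scores, and mapping rules below are invented purely for illustration.

```python
# A toy sketch of sonifying emotion data. This is NOT the real pipeline
# described above; the scores and mapping rules are hypothetical examples.

# Hypothetical per-tweet emotion scores (0.0 to 1.0), e.g. from a classifier.
tweets = [
    {"joy": 0.8, "anger": 0.1, "sadness": 0.1},
    {"joy": 0.2, "anger": 0.7, "sadness": 0.1},
    {"joy": 0.1, "anger": 0.2, "sadness": 0.7},
]

# Assumed mapping rules: joy raises the tempo, anger widens the melodic
# steps, and sadness shifts the melody to a minor scale.
MAJOR = [0, 2, 4, 5, 7, 9, 11]   # semitone offsets of a major scale
MINOR = [0, 2, 3, 5, 7, 8, 10]   # semitone offsets of a natural minor scale


def emotions_to_notes(scores, root=60):
    """Map one tweet's emotion scores to a short MIDI note sequence and a tempo."""
    scale = MINOR if scores["sadness"] > scores["joy"] else MAJOR
    tempo = 80 + int(scores["joy"] * 60)      # more joy -> faster (80-140 BPM)
    step = 1 + int(scores["anger"] * 3)       # more anger -> wider, tenser steps
    notes = [root + scale[(i * step) % len(scale)] for i in range(8)]
    return notes, tempo


for scores in tweets:
    notes, tempo = emotions_to_notes(scores)
    print(f"{tempo} BPM: {notes}")
```

Feed those note sequences into any synth or sequencer and the emotional temperature of a Twitter feed starts to become audible: joyful feeds bounce along in a major key, angry ones jump around restlessly, sad ones slow down and go minor.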
The data got its groove on – so to speak.