The Big Data Analysis interview with Andreas Ribbrock, Team Lead Big Data Analytics and Senior Architect at Teradata GmbH, is now online:
In his interview, Andreas described three classes of technologies required for Big Data: storage, for which he advocated distributed file systems as a competitive approach; query frameworks that translate user queries for a set of different query engines (which he called a 'discovery platform'); and a delivery platform that gets the right results to the right people in the right time frame.
Andreas also stressed that integration is key: Big Data cannot be solved by any single technology but requires a tightly integrated suite of technologies. In general, any Big Data architecture or framework must remain open and adaptable as new technologies and components are plugged in. He suggested fabric computing, where components are virtualized and data flows between them at high speed, as one possible approach.
In terms of impact, two key drivers are Big Data's ability to let companies personalise their communication with clients, and the way user communication channels will change. On the one hand, enterprises can integrate channels for energy consumption, phone use, and banking. On the other, users may prefer their own channels (which produce a lot of data) and impose these on enterprises in specific markets; e.g. traditional banks may soon become obsolete as their functionality is taken over by PayPal (a Teradata customer), Amazon, and Google.
He ended the interview with the phrase: "Big Data is Big Fun!"