
Data Usage

BIG Final Event Workshop

Programme of the BIG Final Event Workshop co-located with ISC Big Data in Heidelberg

 

The BIG Project

Welcome and Introduction Nuria De Lama (ATOS Spain)  

 

Key Technology Trends for Big Data in Europe 

Edward Curry (Research Fellow at Insight @ NUI Galway) 

presentation

 

The Big Data Public Private Partnership 

Nuria De Lama (ATOS Spain)

presentation 

 

Panel discussion about a common Big Data Stakeholder Platform 

Martin Strohbach (AGT International)  

● The PPP Stakeholder Platform

(Nuria De Lama)

● Hardware and Network for Big Data

(Ernestina Menasalvas, RETHINK BIG EU Project) 

● Tackling BIG DATA Externalities

(Kush Wadhwa, Trilateral Research BYTE EU Project) 

● The value of the Stakeholder platform

(Sebnem Rusitschka, Siemens, BIG and BYTE Project) 

 

Networking and Break-out Sessions

 

Update will follow.

 

 

D2.2.2 Final Version of Technical White Paper available

The final version of the technical white paper of deliverable 2.2.2 is now available. It details the results from the Data Value Chain technical working groups, describing the state of the art in each part of the chain together with emerging technological trends for exploiting big data. It consolidates the findings of the different sectorial forums and working groups on big data challenges. The Data Value Chain identifies activities in Data Acquisition, Data Analysis, Data Curation, Data Storage and Data Usage.
A member of the BIG Health forum comments: "We interviewed experts in the biomedical domain to ask for their opinion about the current situation in data acquisition and data quality. We identified challenges that need to be addressed for establishing the basis for BIG health data applications. Additionally, the current data quality challenges were diagnosed and are reported in the deliverable."

Smart Ways To Deal With Big Data

BIG-project member Siemens recently published an article on smart ways to deal with big data.
The article was released shortly before the Christmas holidays in the December edition of urbanDNA, a magazine for the metropolitan world. The article briefly introduces the BIG project and the work done in BIG, and highlights smart urban applications of big data.
The article is available in the December Issue (No.3) of urbanDNA.
You can find further information on big data and the BIG project on our website.

The BIG project at Big Data World Congress, Munich, 3-4 December 2013

The BIG project had a strong presence at Big Data World Congress in Munich in early December, with a strategically positioned stand in the exhibition hall. We met a number of delegates from many industrial sectors and countries, especially in the “speed dating” session, where we perfected the BIG project’s elevator pitch in the quick-fire conversations! Project flyers and stickers were available in many places for people who wanted to learn about the project after the conference. The two-day event was closed by a presentation from the BIG project’s director Josema Cavanillas, introducing the aims of the project and the outputs of our research.
 
The event featured case studies and panels on every aspect of Big Data technologies including governance, unstructured data, real-time analytics and much more. Attendees came from a wide range of organisations, including some big players in sectors such as manufacturing and telecoms. One exciting potential avenue of collaboration may be for BIG to work with the USA’s NIST (National Institute of Standards and Technology) as they are also developing cross-sector consensus requirements and roadmaps for Big Data.
 
Many speakers talked about how adopting Big Data could revolutionise the ways businesses operate, driving efficiency and faster product development. It is recognised by most if not all senior level executives as one of the key IT trends of the next few years - but this comes with the caveat that Big Data initiatives need to be aligned to clear outcomes and business processes in order to have a chance of success. The structure of organisations may need to be adapted to enable technical and business expertise to work together more closely to enable value to be derived from data. Even then, the pace of industry change may be such that organisations will look to form partnerships with start-ups and universities so as to drive innovation. The BIG project’s Public Private Forum could be a key enabler for these communities.
Europe-specific issues were highlighted in several talks. There was criticism of the apparent risk aversion of technology companies and their customers and the lack of a widespread start-up culture (apart from a few isolated exemplars). There are differences between Europe and the US in terms of data protection, the EU’s tougher legislation possibly being a barrier to innovation for some firms (on the other hand, the US’s relatively lax laws may have implications for privacy and the ethics of extensive data collection by businesses).
 

eHealth Interview with Marco Viceconti

Sonja Zillner (Siemens) and John Domingue (STI) conducted an interview with Marco Viceconti, a ‘Sector Visionary’ within the eHealth sector. Marco’s main work over the last few years has been leading the VPH (Virtual Physiological Human) Institute, which has also been the large initiative within the eHealth unit. The overall goal of this work is to combine data and computational models from across Europe to instantiate a complete in silico model of the human biological system, from the whole-body level down to the cell level.

During the interview, the most interesting concepts were:  

  • a k-anonymity membrane - on one side we have specific patient data, which must be controlled; on the other, anonymised versions which can be used for research. Sometimes one needs to pass data back to a specific patient because of analysis carried out in research, so a two-way membrane would be useful.
  • an eHealth data value chain - at the moment, health data that is passed around is 100% observational (e.g. blood pressure or heart rate); being able to smoothly combine this with data resulting from computational models would be of great benefit.
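The k-anonymity idea behind the "membrane" above can be illustrated with a minimal sketch: a data set is k-anonymous when every combination of quasi-identifiers (attributes that could re-identify a patient) occurs at least k times. The record fields and helper name below are hypothetical, chosen only for illustration:

```python
from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k):
    """True if every quasi-identifier combination occurs at least k times."""
    combos = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return all(count >= k for count in combos.values())

# Hypothetical anonymised patient records: age band and partial postcode
# are quasi-identifiers; the diagnosis is the sensitive attribute.
patients = [
    {"age_band": "30-39", "zip3": "691", "diagnosis": "A"},
    {"age_band": "30-39", "zip3": "691", "diagnosis": "B"},
    {"age_band": "40-49", "zip3": "814", "diagnosis": "C"},
]

# The ("40-49", "814") combination occurs only once, so 2-anonymity fails.
print(is_k_anonymous(patients, ["age_band", "zip3"], 2))  # False
```

Data on the research side of the membrane would have to pass a check like this before release; the "two-way" aspect Marco describes would additionally require a controlled mapping from anonymised records back to specific patients.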

You can watch the whole interview via the standard interface, as a small version, or with a large video window with the ability to jump to specific segments.

The interview is also available as an audio-only MP3 file.

For further information about our project or information about big data and the Health sector, please visit our website.

Data Curation Insights - Interview with Paul Groth

The first interview of the Data Curation Insights series is now available on our website.
Edward Curry, BIG member and member of the Digital Enterprise Research Institute at the National University of Ireland, Galway, conducted an interview with Paul Groth, Associate Professor in the Web Media Group at the VU University Amsterdam, about big data and data curation.
The interview sheds light on the role of big data in the context of data curation. Paul Groth reports on his research at the VU Amsterdam: the types of data that are curated, the size of the data sets, the processes and technologies used for data curation, and the number of users involved in the curation processes.
He describes not only his viewpoint on the influence big data will have on future data curation but also the technical demands of curation and curation technologies in the big data context. With respect to big data and data curation, Edward Curry also wanted to know Paul's opinion about which technologies will cope with big data.
The full interview is available on our website.
 
Interested in data curation news? Follow Edward Curry @EdwardACurry, Paul Groth @pgroth or our BIG project @BIG_FP7 on Twitter.
 
 

Interview with Andreas Ribbrock, Team Lead Big Data Analytics and Senior Architect at Teradata GmbH

The Big Data Analysis interview with Andreas Ribbrock, Team Lead Big Data Analytics and Senior Architect at Teradata GmbH, is online now.

In his interview, Andreas talked about three classes of technologies required for big data: storage (advocating distributed file systems as a competitive way to handle it); query frameworks that can translate user queries into calls to a set of different query engines (which he calls a 'discovery platform'); and a platform that can deliver the right results to the right personnel in the right time frame.

Andreas also stressed that integration is key: big data cannot be solved by any single technology but requires a suite of tightly integrated technologies. In general, any architecture or framework for big data must be open and adaptable as new technologies and components are plugged in. Fabric computing, where components are virtualised and data flows at high speed, is one possible approach.

In terms of impact, two key drivers are the ability of big data to let companies personalise their communication with clients, and the way user communication channels will change. On the one hand, one can integrate channels for energy consumption, phone use and banking. On the other, users may prefer their own channels (which produce a lot of data) and impose these on enterprises in specific markets. For example, traditional banks may soon become obsolete as their functionality is taken over by PayPal (a Teradata customer), Amazon and Google.

He ended the interview with the phrase: Big Data is Big Fun!

BIG at LSWT2013 - From Big Data to Smart Data - A Summary

The 5th Leipziger Semantic Web Tag (LSWT2013) was organised as a meeting point for German as well as international Linked Data experts.
Under the motto "From Big Data to Smart Data", sophisticated methods for handling large amounts of data were presented on September 23rd in Leipzig.
The keynote was held by Hans Uszkoreit, scientific director at the German Research Center for Artificial Intelligence (DFKI). After being introduced to text analytics and big data issues, the participants of LSWT2013 discussed the intelligent usage of huge amounts of data on the web.
 
Presentations on industrial and scientific solutions showed working answers to big data concerns. Companies like Empolis, Brox and Ontos presented Linked Data and Semantic Web solutions capable of handling terabytes of data. More traditional approaches, such as Datameer’s data analytics solution based on Hadoop, showed that big data can nowadays be handled without major problems.
 
Furthermore, the presentations tackled problems of detecting topics in massive data streams (Topic/S), document collections (WisARD) and corpora at information service providers (Wolters Kluwer). Even the ethical issue of robots replacing journalists with the help of semantic data was examined by Alexander Siebert from Retresco.
 
In conclusion, the analysis of textual information in large amounts of data is an interesting and so far not fully solved area of work. Further information is available from the website.
 
 Further information on topics related to data analysis, data curation, data storage, data acquisition and data usage can be found in our technical whitepaper available from our project website.

BIG Partner AGT International and Crowd Control Management

The Urban Shield Safe City solution of AGT International, a partner in the BIG project, has been featured on Big Data Startups as part of a crowd control management solution in the city of Enschede.

The system has been deployed during the Dutch radio station 3FM event, an annual benefit project that collects money for charity. According to Big Data Startups, over 6 days around 500,000 visitors came to the centre of Enschede. The objective was to support the police and other first responders with safety and security solutions during the event and to showcase the city’s leading and innovative approach towards meeting these requirements.

According to an AGT press release, the Urban Shield system “provides a real-time situational awareness overview of a complete area within a city. This system is based on a Geographical Information System and uses GPS to show the real-time location of all first responders in an area. All police officers, fire department, city security and private security guards who are part of the system are shown on a map. Based upon a situation that is noticed via the cameras on the street or via Twitcident the closest first responder can be alerted and he or she can take immediate action”.

Further information is available at AGT International and Big Data Startups.

 

 

 

Big Data Analysis Interview with Steve Harris Chief Technology Officer at Garlik, an Experian Company


A new Big Data Analysis interview with Steve Harris, Chief Technology Officer at Garlik, an Experian company, is now available.

The company that Steve is associated with has as its main focus the prediction and detection of financial fraud through the use of their customised RDF store and SPARQL. They harvest several terabytes of raw data from chat rooms and forums associated with hackers and generate around one billion RDF triples from it. As an area that needs work, Steve suggested optimising the performance of these stores. We also discussed the need to make sure that the infrastructure is economically viable and that training staff to use RDF and SPARQL is not a big issue.
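The interview does not detail Garlik's internal queries, but the triple-store approach it describes can be sketched with a toy in-memory store and a SPARQL-style pattern match. All identifiers and data below are hypothetical, and a real store like 5store handles billions of triples with far more sophisticated indexing; only the data model is the same:

```python
# Each fact is a (subject, predicate, object) triple, as in RDF.
triples = {
    ("post:1", "mentions", "card:4111"),
    ("post:1", "postedIn", "forum:carders"),
    ("post:2", "mentions", "card:5500"),
    ("post:2", "postedIn", "forum:cooking"),
}

def match(pattern):
    """Return all triples matching an (s, p, o) pattern; None is a wildcard."""
    s, p, o = pattern
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# A SPARQL-style join: which card numbers are mentioned in the carders forum?
suspect_posts = {s for s, _, _ in match((None, "postedIn", "forum:carders"))}
cards = [o for s, _, o in match((None, "mentions", None)) if s in suspect_posts]
print(cards)  # ['card:4111']
```

The equivalent SPARQL query would join the two patterns with a shared variable (`?post ex:postedIn ex:carders . ?post ex:mentions ?card`); the performance of exactly this kind of join over billions of triples is the optimisation area Steve highlights.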

Steve Harris leads the design and development of a multi-million-user product in the financial services industry at Garlik, an Experian company. In the Semantic Web community, he is widely regarded as the architect of Garlik's open-source, scalable RDF platform, 5store, and has served on the World Wide Web Consortium (W3C) working groups that defined the SPARQL query language [1].
