
Presentation Abstracts

SPDM Online 2020 November 9th - 13th

------------------ Monday November 9th. ------------------


11:00 Cloud based data repositories: What are the implications for subsurface data?
- Joseph Nicholson, Chief Operating Officer, Osokey
What are your ambitions for subsurface data after peak oil? How do these fit with the current themes of energy transition, digital transformation and data democratisation? A look at what has happened so far and what can happen next.

Abstract: The exploration and production industry has huge volumes of raw, processed and analytical subsurface data which it is struggling to manage, integrate and visualise. To address this, many companies are embarking on digitalisation strategies, and a common part of this process is evaluating the benefits of creating seismic data lakes in the cloud. This is often driven by cost saving measures, with 10TB of cloud-based data storage costing as little as $250/week compared with traditional costs in excess of $750/week. Alongside data storage cost efficiencies, this paper will also describe integrations into everyday subsurface workflows, including the impact of cloud technology on typical subsurface activities such as:
- Seismic data loading and QC
- Finding and viewing data
- Analysing pre-stack seismic data
Additionally, data stored in the cloud can be connected with familiar, traditional applications, creating new ways of collaborating remotely, with immediate applications for peer reviews, data rooms and internationally distributed subsurface teams during the current pandemic. From a data management perspective, by incorporating web-based workflow tools, a single version of the data in the cloud can be analysed by multiple parties, removing unnecessary data duplication and data movement from traditional workflows whilst connecting a diverse array of expertise with the data. The learnings shared are designed to inform and assist other subsurface data custodians, whether they are working with energy companies, national data repositories, service companies or academia.
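To make the storage-access point concrete, here is a minimal Python sketch of reading just the 3,200-byte SEG-Y textual header from a cloud object store with a ranged GET, so the file never needs to be downloaded or duplicated. This is an illustrative sketch, not Osokey's implementation: the bucket name and object key are hypothetical, and it assumes AWS credentials are already configured for boto3.

    import boto3

    s3 = boto3.client("s3")

    # Fetch only the 3,200-byte SEG-Y textual header with a ranged GET,
    # avoiding a full download of a multi-gigabyte seismic file.
    resp = s3.get_object(
        Bucket="seismic-data-lake",            # hypothetical bucket
        Key="surveys/north-sea/line_001.sgy",  # hypothetical object key
        Range="bytes=0-3199",
    )
    header = resp["Body"].read().decode("cp037")  # SEG-Y headers are often EBCDIC
    print(header[:80])  # first 80-character header card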


16:00 Data management challenges for CO2 storage – same, same or different?
- Anne-Kari Furre, Advisor Reservoir Geophysics, Equinor
CCS (Carbon Capture and Storage) is gradually becoming more widely used. Here I will discuss some of the data management challenges for deep geological storage of CO2.

Abstract: CCS (Carbon Capture and Storage) is an important instrument in the world’s toolbox for reducing CO2 emissions to the atmosphere. As several large-scale storage projects mature, there is a need to operate and monitor these deep geological storage sites. Many of the challenges are similar to the familiar challenges of operating hydrocarbon field production, but there are some differences related to size, time scales and storage operators’ responsibility. In this presentation we will discuss these issues and show how data management and accountability are important for the license to operate. Key challenges include handling continuous data streams from fibre-optic downhole sensing, handling large datasets from passive seismic monitoring, and preserving sufficient data subsets for long-term evaluations by multiple stakeholders.


------------------ Tuesday November 10th. ------------------


11:00 The Data Management of Context
- Martin Storey, Senior Consultant, Well Data QA
Preserving petroleum data in turbulent times

Abstract: The oil and gas industry is experiencing a “data-everything” hype at the same time as a sudden requirement for workplace reorganisations and dramatic cost reductions. This unforeseen coincidence creates opportunities as well as significant risks for geotechnical data and knowledge workers, and hence for the industry itself. We are all exposed, but not powerless. The presenter will combine personal observations with characteristic examples from both within and outside this industry to discern where currently available resources should be focused. The objective is to invite the audience to recognise and seize the opportunities, and to beware the risks, in their own circumstances.


16:00 The Long View
- Steve Hawtin, Director, White Turret Ltd
How past industry initiatives inform the present

Abstract: OSDU has active support from major oil companies, is focused on creating open standards, and is attracting innovative and knowledgeable contributors. In the 1990s the POSC and OpenSpirit initiatives possessed exactly these same elements, yet both failed to deliver their main goals.
What lessons can be drawn? What insights should modern data managers take from them to improve today’s delivery?


------------------ Wednesday November 11th. ------------------


11:00 Can Elasticsearch help us access large Oil & Gas datasets more efficiently?
- Paul Gibb, Business Development & Account Manager, Petrosys
As E&P companies strive for efficiency, and as other industries seek to make use of data already collected from years of Oil & Gas activities, can Elasticsearch (the world’s leading open-source search and analytics solution) play a pivotal role in making sense of all the data? We present our findings from working with Elasticsearch on Oil and Gas datasets so far.

Abstract: Elasticsearch is an open-source search and analytics engine. Its broadly distributable, readily scalable, enterprise-grade search engine has been transforming the search capabilities of many major organisations worldwide in almost all industries where ‘big data’ presents mission-critical challenges.
Data collected in the Oil & Gas industry are an excellent example of such data. The data collected during Exploration & Production activities currently serve two broad purposes: firstly, they reduce the cost, risk and time taken to explore for and produce hydrocarbons; secondly, other industries with less capital to invest can benefit greatly by accessing the Oil & Gas data already collected.

Both of these purposes assume that the data can be made available to the end user quickly and in a meaningful way – enter Elasticsearch.
Building on some practical work done for an operator in Oman, we tested the ability of a crawler to read over 1.2 million E&P-specific records and write them to Elasticsearch, before viewing the data in Kibana through a web browser. The crawler read data from a custom Intranet installation, a SQL Server export, SharePoint and an internal file system, turning it into a series of records. The records were then passed through a series of processes, and Elasticsearch provided the capabilities to (a minimal indexing and search sketch follows the list):
- Quickly search over document content, including content extracted using OCR
- Extract common keywords from documents with matching categories
- Store a model that can be used by the machine learning process to categorise documents
- Identify and update matching document categories based on the stored model
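
As a rough illustration of the indexing and search steps described above (not the authors' actual crawler code), the Python sketch below indexes one crawled record into Elasticsearch and runs a full-text query. The index name and field names are hypothetical, and it assumes an Elasticsearch 8.x cluster reachable on localhost.

    from elasticsearch import Elasticsearch

    es = Elasticsearch("http://localhost:9200")  # assumed local cluster

    # Index one crawled record; the fields are illustrative stand-ins for
    # what a crawler might extract from SharePoint, SQL Server, etc.
    es.index(index="ep-records", document={
        "title": "Well completion report, Well A-1",
        "source": "sharepoint",
        "category": "well_report",
        "content": "Text extracted from the document, e.g. via OCR...",
    })

    # Full-text search over document content.
    hits = es.search(index="ep-records", query={"match": {"content": "completion"}})
    for hit in hits["hits"]["hits"]:
        print(hit["_source"]["title"])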

The test case showed that Elasticsearch was fast and reliable, and that the outputs could be easily understood and used by geoscientists in the E&P industry, or in emerging industries such as CCS or geothermal which rely on historical data. The learnings from here are now ready to be implemented in a full-scale system.


16:00 Transition to the future with the OSDU Data Ecosystem
- Phillip Jong, Manager Data Foundation Design and OSDU Chair, Shell Global Solutions Inc.

Abstract: The oil and gas industry is facing a huge challenge, exacerbated by reduced energy demand and increased energy supply. The leading operators are accelerating their digital transformation to improve their resilience and reshaping their portfolios to include new energy sources. An industry-standard open data ecosystem that manages production data from all energy sources will address this challenge and increase the speed of innovation.


------------------ Thursday November 12th. ------------------


11:00 Detecting and segmenting tabular data in unstructured documents
- Henri Blondelle, CEO, AgileDD
How combining classical Machine Learning (ML), Convolutional Neural Networks (CNNs) and Weighted Finite-State Transducers can help solve this difficult issue.

Abstract: An important activity of exploration and reservoir geoscientists is to collect past data and combine them with new measurements and new hypotheses to develop the models on which their decisions will be based. Despite the progress of corporate databases and companies’ data lakes, the data needed by geoscientists are frequently stored in unstructured formats such as PDFs or scans of paper reports.
In the case of core measurements or PVT analyses, this obliges geoscientists to retype large numbers of values printed in tabular format into spreadsheets. This task is very time-consuming and, because of its cost, frequently not done exhaustively.
AgileDD has been sponsored by 6 large O&G organizations (TOTAL, Technip, Saipem, Schlumberger, Subsea7 and IFPEN) to explore solutions allowing automation of this task and to publish the results as an open-source library.
Using an approach similar to natural speech detection, AgileDD has developed a solution based on several Artificial Intelligence methods (Machine Learning, Convolutional Neural Networks, the Viterbi algorithm) to accurately locate tabular structures in unstructured documents. These tools have been integrated into Tabula, an open-source application, combining table detection and segmentation in a single easy-to-use application. This paper will detail this experience and the results obtained on a set of several thousand PDF files submitted by the sponsors.
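
For readers who want to try the segmentation step, the sketch below uses tabula-py (a Python wrapper around the Tabula engine mentioned above) to extract a table from a known region of a PDF page into a DataFrame. It is a minimal sketch, not the AgileDD detection pipeline: the file name and bounding-box coordinates are hypothetical stand-ins for the output of a table detector.

    import tabula  # tabula-py; requires a Java runtime

    # Once a detector has located a table's bounding box on a page,
    # Tabula can segment that region into a pandas DataFrame.
    tables = tabula.read_pdf(
        "core_measurements.pdf",           # hypothetical scanned report
        pages=3,
        area=[120.0, 40.0, 680.0, 560.0],  # top, left, bottom, right, in points
    )
    tables[0].to_csv("core_measurements_p3.csv", index=False)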


11:30 Making unstructured data instantly available for E&P decisions
- Kim Gunn Maver, Executive Vice President, Business Development, Iraya Energies
The oil and gas industry is awash with data, 80% of which is unstructured and therefore difficult to utilize. Organizing the data using artificial intelligence makes it instantly accessible through a cloud-native web interface.

Abstract: The oil and gas industry is among the most data-intensive industries in the world. While over the last decade the industry has become very effective and efficient at managing, storing and sharing structured data, it has failed to find cost-effective ways to manage and use unstructured data (reports, presentations, spreadsheets etc.), which is predicted to make up 80% of all data. Further, according to GE research, less than 1% of collected data is used by oil and gas companies in decision making.

This data challenge can be met by processing the fragmented and under-utilized data, using advances in supervised and unsupervised Machine Learning and cloud computing to organize the unstructured data, extract new knowledge and integrate it with existing workflows. Digitized unstructured data are ingested through a pipeline whose workflows use machine learning techniques such as Natural Language Processing or Deep Convolutional Neural Networks to provide a structured dataset by tagging texts and images.

A workflow for automatically extracting information from the documents consists of a set of algorithms that identify segments within a document, after which supervised machine learning classifies each segment as either text or non-text. Optical Character Recognition is applied to the text segments to convert them into editable text, which is then further analyzed using Natural Language Processing and sentence analysis. In a separate data pipeline, the non-text components such as images and tables are tagged using Convolutional Neural Networks. The results of organizing the unstructured data are listed below (a minimal OCR-and-tagging sketch follows the list):

- Data management: The data are accessible through a data lake using a cloud-native web-enabled interface, and any data (text and images) can be instantly identified, located and retrieved.
- Decision making: It is possible to retrieve relevant information when required and improve decision making through intelligent full-text search of the text corpus and images, with a link to the original data point, and through structuring of all images for better overview and comparison. This is estimated to be up to 40 times faster than manually reading and reviewing the data.
- New knowledge: Higher-order analysis can be applied to the organized data. For example, a knowledge graph can make the history of more than 150 wells from 50 years of exploration instantly available.
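
As a minimal illustration of the OCR-and-tagging step described above (not Iraya Energies' pipeline), the Python sketch below converts one scanned page segment into editable text with Tesseract and applies a naive keyword tagger. A production workflow would replace the keyword rule with NLP models; the file name and keyword list are hypothetical.

    from PIL import Image
    import pytesseract  # requires a local Tesseract installation

    # OCR one scanned page segment into editable text.
    text = pytesseract.image_to_string(Image.open("page_segment.png"))

    # Naive stand-in for NLP tagging: flag segments mentioning key terms.
    keywords = ("porosity", "permeability", "PVT")
    tags = [kw for kw in keywords if kw.lower() in text.lower()]
    print(tags)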


16:00 - 17:00 Panel Session - Education in Energy Data Management - Looking to the future
- Moderators: Dan Brown, CDA, and Jane Hodson, Premier Oil


------------------ Friday November 13th. ------------------


11:00 The way forward for National Petroleum Data – An African Perspective
- Kwadwo Kyeremateng, Data Management Officer, Petroleum Commission Ghana
This is a presentation of the reflections and experiences of managing E&P data for an upstream oil regulator in Africa. It is intended to encourage the sharing of experiences and valuable lessons among data practitioners in the oil and gas industry.

Abstract: This presentation attempts to give context to the challenges faced by petroleum data managers, especially those who manage a National Data Repository in Africa, and some of the solutions that are helping them overcome these challenges. Technical issues are separated from non-technical ones. Critical data management issues such as data governance, service management and regulatory requirements, along with operational challenges arising from technical issues, are brought forward for further deliberation and discussion.


16:00 Implement an End-to-End Upstream E&P Workflow Solution Using Machine Learning
- Sunil Garg, CEO, DataVedik
The Data Science life cycle to build, operationalize and maintain end-to-end upstream E&P workflow solutions using Machine Learning.

Abstract: This presentation will focus on using Machine Learning to solve real Oil and Gas problems and convert these into end-user-centric solutions. It will discuss the various aspects of building and deploying a successful ML-based solution, including Data Ingestion, Pre-processing, Data Lake, Machine Learning and operationalization, with the help of E&P workflow examples. It will specifically cover the following phases of the Data Science lifecycle (a minimal pipeline sketch follows the list):
- Identify use case and pain points
- Identify and collect the relevant data
- Build/train and test the Machine Learning model
- Validation of the model results by Domain Experts
- Build solution and operationalize
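
A minimal end-to-end sketch of those phases in Python with scikit-learn is shown below. It is illustrative only, not DataVedik's solution: synthetic data stands in for ingested E&P records, and the model choice, features and file name are hypothetical.

    import joblib
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # Collect: synthetic stand-in for ingested, pre-processed E&P data.
    X, y = make_classification(n_samples=500, n_features=8, random_state=0)

    # Build/train and test the model.
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Validate: report hold-out accuracy for domain-expert review.
    print("hold-out accuracy:", accuracy_score(y_test, model.predict(X_test)))

    # Operationalize: persist the validated model for deployment.
    joblib.dump(model, "ep_model.joblib")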
