

1 The Digital Future of Pipeline Integrity Management


Gaurav Singh


ROSEN Europe B.V., Oldenzaal, The Netherlands


1.1 Introduction


Over the last decade, integrity monitoring and assessment have grown in importance. Nevertheless, numerous high-profile safety incidents over the years have resulted in product releases and, in some cases, human casualties. These incidents have led to stricter regulatory standards from governments and raised public concern about pipeline integrity and the environmental impact of pipeline transportation. Most pipeline operators have integrity management plans that follow the norms and obligations imposed by their governments, but the effectiveness of integrity management ultimately depends on an organization’s budget [1].


Effective pipeline integrity and structural health management relies on numerous factors: data integration, threat identification, risk identification, and proactive decision-making to mitigate issues. These critical factors ensure a company’s optimum operations and successfully facilitate risk-based integrity assessment [1, 2]. Executing an integrated approach using existing integrity methods and digital monitoring technologies is challenging, which often leads to a labor-intensive, lagging, and untargeted process for mitigation and maintenance activities [3]. The integrated approach faces two challenges:



  1. Gathering data from operators and managing databases
  2. Standardizing data and building integrity risk models from disparate datasets, which introduces uncertainty into the predictions

These steps are followed by mapping, integrating, and managing the data. The process stalls until the data is received from the operators or surveyors, and further time is lost before the identified risk is reported.


Pipeline integrity has, in general, begun a digital transformation journey. New technologies and solutions are available to help pipeline operators achieve better and more feasible integrity solutions [2, 4], leading to safe and efficient operations using the uniquely designed framework approach discussed hereunder.


1.2 Digital Integrity Framework


A digital integrity framework consists of two frameworks that are generic enough to be applied to any asset type. The first is a Pipeline Integrity Framework (PIF), which can be applied to any operating asset to determine its current condition and how it can be improved. The second is a technological framework, which, in conjunction with the first, helps implement digitalization processes for managing the assets. The purpose of this book is to focus on pipelines as an asset. Let us look at both frameworks in more detail.


1.2.1 General Pipeline Integrity Framework


A PIF is not something new in the industry; it has evolved from a simple plan-do-check-act exercise to a more detailed look at the identification and implementation of processes that are future-ready and can be easily integrated with evolving data and digital technologies. Figure 1.1 shows one such PIF, which can be applied to any asset with the end goal of ensuring safe pipeline operation.


The framework approach [1] creates an opportunity for users to combine competencies from different domains, such as inspection technologies (left side), digitalization and data management, and integrity expertise (right side), to mention just a few. Following the framework approach, one must also make sure that legacy data (records) is digitally stored in a system of record, which acts as a single body of “truth data” for integrity engineers to carry out their assessments. That is why data management is at the core of this framework: it makes compliance easier by ensuring that all inspection and integrity records are traceable, verifiable, and complete. It demonstrates that integrity assessments (e.g., risk models) comply with national and international standards; presents a complete history of inspections, including Non-Destructive Evaluation (NDE) and repairs; and retraces the integrity assessment process and the factors considered.

[Flowchart: data gathering leads to system selection, pipeline preparation, and pipeline inspection, branching into preliminary screening, final reporting, and an updated risk assessment; training, threat analysis, data management, immediate and future fitness assessment, targeted field verification, and pipeline rehabilitation are integrated throughout.]

Figure 1.1 Pipeline integrity framework (PIF).


(Adapted from Ref. [1].)


Figure 1.2 Digital framework for PIMS.


1.2.2 Digital Framework for Pipeline Integrity Management Systems


Digitalization takes place when we break away from old practices—the legacy way of storing data in flat files, distributed systems, file folders on network drives, or in hard copies—and adapt to evolving digital technologies to collect, store, and analyze data that is accessible to all decision-makers in the organization. After all, the data is the property of the originator and not of a specific user group (e.g., inspection or integrity) within the organization. Keeping this as one of our focuses, we designed a digital framework approach for managing not just pipeline datasets but also datasets for other assets, such as tanks, distribution lines, and piping. The system of record where all the data sits is at its core. Our framework consists of two components, as shown in Figure 1.2: data management and integrity management.


1.2.3 Data Management


Data has become an intangible asset in many domains, and pipeline integrity is no exception. In fact, the pipeline integrity industry is one in which huge amounts of data are generated and gathered throughout the life of the asset, from the design stage through operations to decommissioning. An example of the variety of integrity and other datasets gathered for integrity assessments can be seen in Figure 1.3.


Figure 1.3 Integrity data (left); additional datasets (right).


In the language of data engineers, we can further categorize this as “data at REST” and “data in MOTION.” Data at REST are the data we collect by running In-Line Inspections (ILIs) or Non-Destructive Tests (NDTs), or even data that surround the assets. This dataset becomes snapshot data for the pipelines, recording the condition of the pipeline at that moment.


Data in MOTION are the data we collect from sensors every second; this could be data related to pressure and temperature coming from the SCADA system, potential data from cathodic protection systems, ground movement data from inclinometers, leak detection data from live sensors, or even live audio-visual feeds from an incident site. All of this data can be stored in a database management system or in memory that can be accessed via cloud platforms/solutions or locally. Furthermore, one can classify data differently depending on the applications consuming it.


With such varied datasets, it is imperative to store them in a standard data model, such as the one proposed by the Pipeline Open Data Standards (PODS) association or Esri’s proprietary Utility and Pipeline Data Model (UPDM).
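To make this concrete, here is a minimal, hypothetical sketch of a PODS-style linearly referenced schema using Python and SQLite. The table and column names are invented for illustration and are far simpler than the real PODS or UPDM models; the key idea is that events carry both a linear measure (station) and a spatial coordinate.

```python
import sqlite3

# Minimal, hypothetical sketch of a PODS-style linearly referenced model.
# Real PODS/UPDM schemas are far richer; names here are illustrative only.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE pipeline (
    pipeline_id   TEXT PRIMARY KEY,
    name          TEXT,
    diameter_mm   REAL,
    length_km     REAL
);

-- Events (anomalies, valves, repairs, ...) are tied to the centerline
-- by a measure (station) as well as by coordinates.
CREATE TABLE ili_anomaly_event (
    event_id      INTEGER PRIMARY KEY,
    pipeline_id   TEXT REFERENCES pipeline(pipeline_id),
    station_m     REAL,      -- linear measure along the centerline
    latitude      REAL,      -- spatial reference for the same point
    longitude     REAL,
    anomaly_type  TEXT,      -- e.g. 'external metal loss'
    depth_pct_wt  REAL       -- depth as % of wall thickness
);
""")
con.execute("INSERT INTO pipeline VALUES ('PL-001', 'Demo line', 914.0, 42.5)")
con.execute(
    "INSERT INTO ili_anomaly_event VALUES (1, 'PL-001', 1250.3, 52.29, 6.94, "
    "'external metal loss', 23.0)"
)
print(con.execute("SELECT * FROM ili_anomaly_event").fetchall())
```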


Because ILI inspections are carried out by different service providers, it is important for an operator to linearly and spatially align all inspection and survey data, such that every dataset in the system is represented by the same distance measurement and each asset or joint has a coordinate attached to it. A well-structured workflow supports data analysts in integrating, managing, and maintaining large amounts of location data and other asset-related data originating from different systems and in different formats. Data management is seamlessly integrated into GIS, providing the full power of ArcGIS Pipeline Referencing (APR) and thereby enabling operators to manage route and event data effectively on multiple linear referencing systems.
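The heart of linear alignment can be sketched as interpolation between matched reference points, typically girth welds identified in both runs. The helper below is a minimal illustration with invented data, not the workflow of APR or any specific alignment tool.

```python
import numpy as np

def align_odometer(od_new, weld_od_new, weld_station_ref):
    """Map a new ILI run's odometer readings onto a reference stationing.

    od_new           : odometer values (m) of features in the new run
    weld_od_new      : odometer values (m) of matched girth welds, new run
    weld_station_ref : stations (m) of the same welds in the reference system
    Between matched welds, distances are interpolated linearly.
    """
    return np.interp(od_new, weld_od_new, weld_station_ref)

# Toy example: the new tool's odometer drifts ~0.5% versus the reference.
weld_od_new = np.array([0.0, 1005.0, 2011.0, 3020.0])
weld_ref    = np.array([0.0, 1000.0, 2000.0, 3000.0])
anomalies   = np.array([512.0, 1507.5, 2990.0])
print(align_odometer(anomalies, weld_od_new, weld_ref))
# -> anomaly positions expressed in the common reference stationing
```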


Data alignment is the final step for survey and inspection data. Visual and tabular views, together with real-time quality indicators (Figure 1.4), allow the user to monitor alignment progress and accuracy. Visual components involved in the alignment can be controlled by the user and stored as templates.


In any integrity analysis, the alignment of the data, both spatially and linearly (for pipelines), is key to generating value from the data, especially when it is used in risk assessment [5].


Collecting, storing, and synchronizing datasets provides a wealth of information for engineers and decision-makers in the form of Key Performance Indicators (KPIs), as shown in Figure 1.5. These KPIs give decision-makers a holistic view of the health of the assets and help them allocate investments where they are needed most.
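As a sketch of how such KPIs might be derived, assuming a hypothetical repair-event extract with invented column names and values:

```python
import pandas as pd

# Hypothetical repair-event extract; columns are invented for the example.
repairs = pd.DataFrame({
    "pipeline_id": ["PL-001", "PL-001", "PL-002", "PL-002", "PL-002"],
    "year":        [2021, 2022, 2021, 2022, 2023],
    "status":      ["completed", "completed", "planned", "completed", "planned"],
    "cost_usd":    [410_000, 385_000, 295_000, 520_000, 310_000],
})

kpis = {
    "total_repairs": len(repairs),
    "total_cost_musd": repairs["cost_usd"].sum() / 1e6,
    "completed_share": (repairs["status"] == "completed").mean(),
}
per_year = repairs.groupby(["year", "status"]).size().unstack(fill_value=0)
print(kpis)
print(per_year)  # feeds a 'repairs by year' bar chart on a dashboard
```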


Organizing the data in this way is a first step toward the adoption of digital technologies, or digitalization, which would further accelerate the development of digital twin models. These activities are tracked in the system, offering a fully auditable data management platform. The process becomes even more powerful when leveraging cloud-based data storage and processing, making data, processes, assessment results, and visualizations available at our fingertips anytime and anywhere. The data management platform should also be data and vendor agnostic so that data-driven solutions can be deployed and the full value potential exploited.


Figure 1.4 ILI data alignment and monitoring alignment quality.


Figure 1.5 Operations management dashboards with KPIs.


1.2.4 Integrity Management


Integrity management is a continuous assessment process that is applied to an operating asset in order to avoid failure. For the operator, its purpose is to achieve:



  • Safe operation of assets following national and international standards
  • Operation at optimized performance to yield maximum return on investment
  • Operation as long as possible beyond the design life of the asset

Typically, the assessments are based on codes and standards, such as ASME, NACE, BS, ISO, and the like. Reading these standards, using them for assessments, and later deducing information for decision-making requires substantial industry experience, which is not easily acquired. Therefore, the integrity management component of our framework allows for the configuration of autonomous definitions as well as the modification of integrity management processes and integrated algorithms as needed. Not restricted to a set of predefined functionalities, it presents the user with unlimited options in terms of adaptation to individual requirements: new integrity processes can be created and existing ones altered at any point in time, bringing flexibility when the codes and standards mentioned previously are updated.


The process-based logic of the IM component follows the idea that each integrity assessment can be represented as a logical step-by-step process, whether the process is a relatively straightforward defect assessment calculation or a highly complex quantitative risk assessment. The process-based design ensures that the integrity engineer follows well-defined steps to generate meaningful results, and it guides the engineer (new or experienced) from top to bottom through the defined steps of the assessment in a digitally savvy way.
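As an illustration of such a “relatively straightforward defect assessment calculation,” the sketch below follows the general Modified B31G (0.85 dL) form for estimating the failure pressure of a metal-loss defect. The inputs are invented, and a real process step would implement the standard’s full rules, units handling, and safety factors.

```python
import math

def modified_b31g_failure_pressure(D, t, d, L, smys_mpa):
    """Estimated failure pressure (MPa) of a metal-loss defect,
    following the Modified B31G (0.85 dL) form. Illustrative sketch only.

    D, t : pipe diameter and wall thickness (mm)
    d, L : defect depth (mm) and axial length (mm)
    """
    flow_stress = smys_mpa + 69.0          # SMYS + 69 MPa (10 ksi)
    z = L**2 / (D * t)
    if z <= 50.0:                          # Folias bulging factor M
        M = math.sqrt(1 + 0.6275 * z - 0.003375 * z**2)
    else:
        M = 0.032 * z + 3.3
    sf = flow_stress * (1 - 0.85 * (d / t)) / (1 - 0.85 * (d / t) / M)
    return 2 * sf * t / D                  # Barlow: hoop stress -> pressure

# Example: 30% deep, 150 mm long defect in a 914 mm x 12.7 mm X60 pipe.
p_fail = modified_b31g_failure_pressure(D=914.0, t=12.7, d=0.3 * 12.7,
                                        L=150.0, smys_mpa=414.0)
print(f"Estimated failure pressure: {p_fail:.2f} MPa")
```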


Once the assessments are carried out, the results can be visualized through the free definition of scenario-centric dashboards, which are combined visualizations consisting of charts, tables, maps, band views, and risk matrices, all interconnected with each other and configurable as per the user’s requirements. One such example of a dashboard is provided in Figure 1.6.


To make smarter, more efficient decisions, we need three ingredients: data, processes, and people.


According to ISO 55000, achieving excellence in Asset Integrity Management (AIM) requires clearly defined objectives, transparent and consistent decision-making, and a long-term strategic view. Specifically, it recommends well-defined policies and processes to bring about performance and cost improvements, improved risk management, business growth, and enhanced stakeholder confidence through regulatory compliance.


In reality, processes are interpreted differently by different pipeline operators, and the workflows that make up these processes are often defined by individual engineers and subject matter experts (SMEs). Indeed, although different operators will all have the same fundamental objectives (e.g., zero failures, improved performance, and reduced costs), the processes they adopt to achieve those objectives will vary. For example, data management within individual companies will vary depending on the size of and skill sets within the company.


Two challenges in developing a successful digital PIMS framework can be foreseen. First, although integrity processes may well have been developed by SMEs, they may not be fully transparent throughout the company. Second, the datasets necessary to allow integrity assessments may not all be available at the same, or even at the correct, location, and they may not be correctly aligned. This can result in quite different integrity assessment results, depending on the engineers conducting the assessments and the datasets utilized.


Figure 1.6 Integrity management dashboard (immediate FFS).


So, the digital PIMS platform should ideally address both challenges by providing a framework-based approach in which traceable, verifiable, and complete records are stored and aligned, together with standard and tailored integrity processes, created with the SMEs’ knowledge, that can be utilized throughout the organization. Consistent asset integrity assessments will then be the norm throughout the company.


1.3 Fast Forward Digital Future Technologies


1.3.1 Integrity Data Warehouse


To develop a digital PIF adhering to industry standards, the primary step is to have a data management plan. Figure 1.7 shows a data warehouse, which stores anonymized pipeline data consisting of ILI runs, design and construction data of the pipelines, operational parameters, and above-ground survey information from over 145,000 km of pipelines and counting. This accounts for 15,000 pipelines across the globe with internal and external defects, cumulatively comprising more than 11 million defects [1].


This number includes hundreds of pipelines subject to Electro-Magnetic Acoustic Transducer (EMAT) and/or ultrasonic (UT) crack detection inspections, in addition to several thousand metal loss inspections (MFL, UT). This unique database provides information on where cracks or metal loss of different types can be found, as well as detailed knowledge of the variables that should be considered within data analytics studies, whether from ILI data, above-ground survey data, environmental data, or operational data.


Simultaneously, analytical tools and techniques are developed and successfully implemented to handle big data. The data and tools efficiently improve pipeline integrity management by adhering to industry standards and providing decision-making solutions for the entire pipeline integrity lifecycle, right from data management to inspection selection to integrity management.


The next stage in the application of data analytics is to understand what will happen in the future, meaning predictive analytics. This is the stage where we take the trends we have seen in the descriptive analytics stage and use them to create relevant predictive models of what will happen in the future. In addition, by using these two stages, either independently as separate tools or as a combined tool (known as prescriptive analytics), we can come to an informed decision about what we should do in the future.


Applying sophisticated algorithms using machine learning techniques means unearthing new opportunities to find indicators that threaten the integrity of the pipelines and were previously unknown to the SME. Such techniques help operators move from time-based maintenance to more proactive, predictive pipeline maintenance, thereby also allowing substantial OPEX savings and planning for future CAPEX allocation.


1.3.2 Descriptive Analytics: What Has Happened?


One of the simplest examples of descriptive analytics is the benchmarking of pipelines. Figure 1.8 shows about 5000 pipelines plotted anonymously based on their external corrosion condition. The y-axis shows anomaly density (as reported by ILI), and the x-axis shows maximum depth. This plot provides a representation of prevalence (how widespread external corrosion is) and severity.


Figure 1.7 Integrated integrity data warehouse (IDW).


Figure 1.8 Benchmarking of pipelines using IDW.


(Adapted from Refs. [6, 7].)


We have highlighted one particular network of pipelines where one can quickly identify which pipelines are good (left of plot—relatively low feature count/exceedance), bad (right of plot—relatively high feature count/exceedance), or average (somewhere in the middle) [8]. With appropriate metrics, the same technique can be used for any other measurable pipeline threat, including cracks, dents, and bending strain.
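A minimal sketch of this benchmarking view, assuming a hypothetical per-pipeline summary table (column names invented), might look as follows; it plots prevalence against severity on log scales, mirroring Figure 1.8.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical per-pipeline summary, e.g. derived from ILI feature lists.
df = pd.DataFrame({
    "pipeline_id":      ["A1", "A2", "B1", "B2", "C1"],
    "ext_anomalies":    [12, 480, 3, 1500, 260],
    "surface_area_m2":  [9_000, 11_000, 7_500, 10_200, 8_800],
    "max_depth_pct_wt": [18.0, 45.0, 9.0, 62.0, 31.0],
})
df["anomaly_density"] = df["ext_anomalies"] / df["surface_area_m2"]

# Prevalence (density) vs. severity (max depth), both on log scales.
ax = df.plot.scatter(x="max_depth_pct_wt", y="anomaly_density",
                     loglog=True, alpha=0.6)
ax.set_xlabel("Maximum depth (% WT)")
ax.set_ylabel("Anomaly density (1/m²)")
plt.show()
```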


1.3.3 Predictive Analytics: What Will Happen?


The most obvious application of predictive analytics is gaining an understanding of the condition of a pipeline that cannot be inspected using ILI technology—which is a reality for half of the world’s pipelines. The goal is to predict the present and the future state of the pipelines.


Historically, these pipelines have been managed with direct assessment techniques involving traditional modeling or susceptibility analyses, followed by direct examination in the field. Though effective at times, this is a costly process with no guarantee of success, especially if subsea pipelines are considered. We therefore tend to know relatively little about the true condition of uninspected pipelines, particularly when they are at the bottom of the ocean or buried underground.


This is an example of how data analytics can bring real value: by learning from the condition of similar pipelines that have been inspected in the past, we can begin to understand the different variables, such as coating type, pipe grade, CP potential, or soil properties, that predict pipeline threats, and develop models to predict the condition of uninspected pipelines at joint level.


Predictive analytics describes the creation of a predictive model, which will be of the form [8–12]:


y = f(x₁, x₂, …, xₙ)    (1.1)

where y is the “target variable” and {xᵢ} are the relevant “predictor variables,” which could be properties of the pipe joint, the environment, etc. For the purpose of locating and identifying a specific threat (corrosion or crack defects), the target variable will be the anomaly classification. This target variable will be estimated for all reported crack-like anomalies in the target pipeline. The predictive model will be created using supervised machine learning techniques. In a supervised machine learning model, the relationship between y and {xᵢ} is learned using historical observations of the variables. This is recommended in cases where there is abundant “ground truth,” such as field verification results, with which to train the model. Supervised machine learning techniques include logistic regression, support vector machines, and decision trees.
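A minimal sketch of such a supervised model using scikit-learn follows; the joint-level features, labels, and their relationship are entirely synthetic and purely illustrative of the workflow, not of any production model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2_000  # hypothetical pipe joints from previously inspected lines

# Invented predictor variables {x_i}: age (years), CP potential (V vs CSE),
# soil resistivity (ohm-m), coating type code (0 = FBE, 1 = coal tar, ...).
X = np.column_stack([
    rng.uniform(5, 60, n),
    rng.uniform(-1.2, -0.4, n),
    rng.uniform(10, 200, n),
    rng.integers(0, 3, n),
])
# Synthetic target y: 1 if the joint has reportable external corrosion.
logit = 0.05 * X[:, 0] + 4.0 * (X[:, 1] + 0.85) - 0.01 * X[:, 2] - 2.0
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1_000).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```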


1.3.4 Use Case: Virtual ILI


ILI can reliably detect a number of pipeline threats, but many pipelines across the world cannot be internally inspected due to constraints such as location, flow rate, tight bends, valves, tees, or any of the other features that traditionally create obstacles for an ILI tool. So, can a virtual inspection give us reliable information on what to expect from a real inspection by using data from inspected pipelines?


Collecting data over the years and creating a data warehouse provides us the opportunity to look back into data and find various patterns (variables) that are responsible for features or defects in the pipeline. This is helpful in predicting the condition of a pipeline that does not have a provision to run an ILI tool (Figure 1.9). Results can be achieved by utilizing the latest digital technologies, including machine learning and data analytics.


Figure 1.9 Overview of virtual ILI.


(Adapted from Refs. [9, 12].)


Virtual ILI [9, 12] can be applied to any threat that can be detected via ILI, such as corrosion (internal or external). For external corrosion, for example, one would look at the density of anomalies in old pipelines and their interaction with the CP system; for internal corrosion, the prediction could be based on various other variables, such as elevation, product type, pressure-cycle information, soil condition, coating type, etc. One should be aware of the boundary conditions when using such machine learning techniques, as they depend heavily on the availability of actual parametric data (soil pH, soil type, coating condition, etc.) required to train the models. Only good-quality data will generate conclusive results.


Research [8–10, 12] suggests that machine learning models applied to external-corrosion condition metrics show promising performance for virtual ILI. The approach is expected to be valuable for a variety of integrity management applications, helping integrity engineers and pipeline managers who need timely and actionable information/reports to avoid potential problems.


1.3.5 Space-Based Digital Asset Monitoring (Earth Observation)


Remote sensing techniques make it possible to collect data from dangerous or inaccessible areas, with growing relevance in modern society. They replace slower, more costly data collection on the ground, providing fast and repetitive coverage of vast regions for routine applications ranging from weather forecasts to reports on natural disasters and climate change [13]. Remote sensing is also an unobtrusive method, allowing users to collect data and perform data processing and geospatial analysis offsite without disturbing the target area or object. Monitoring of floods, forest fires, deforestation, vegetation health, chemical concentrations, infrastructure health, and earthquakes are just a few subjects in which geospatial remote sensing provides a global perspective and actionable insights that would otherwise be impossible.


The data collection method typically involves aircraft-based and satellite-based sensor technologies, classified as either passive or active sensors. Responding to external stimuli, passive sensors gather radiation reflected or emitted by an object or the surrounding space. The most common source of radiation measured by passive remote sensors is reflected sunlight. Other common examples of passive remote sensors include charge-coupled devices, film photography, radiometers, and infrared sensors.


Active sensors use an internal energy source to collect data, emitting energy to scan objects and areas, after which a sensor measures the energy reflected from the target. RADAR and LiDAR are popular examples of active remote sensing instruments, measuring the time delay between emission and return to establish an object’s location, direction, and speed [13, 14]. The data gathered are then processed and analyzed with remote sensing hardware and computer software (e.g., energy analytics and business intelligence) available in various proprietary and open-source applications. Simply put, the data are acquired and processed digitally; hence the term digital asset monitoring [3, 4].


Typical digital asset monitoring involves acquiring data and reporting digitally, i.e., without conducting field surveys. The monitoring process is conducted in sequential stages, as depicted in Figure 1.10 and explained hereunder.


1.3.5.1 Identification of Assets and Problems


The pipeline corridor is identified as an asset. The corridor’s location on a global scale is essential, as problems vary regionally. Primarily, geophysical factors are considered. For example, active earthquake and flood-prone areas significantly affect the pipeline asset on a global scale, increasing stress and external corrosion, respectively. Next, the intensity of the problems is determined, along with which data will solve them.


Figure 1.10 Stages of digital data monitoring.


1.3.5.2 Data Acquisition


Satellite images from different sensors (wavelengths) or existing aerial imagery archives are acquired. Satellite images are acquired regularly at precise time intervals. Consistency in obtaining data from satellites is essential for the accurate monitoring of assets.


1.3.5.3 Data Processing and Analytics


Increased processing power combined with more sophisticated algorithms creates new opportunities for data analytics, conceivably delivering insights into previously unidentified threats to pipeline integrity. The data acquired are in the form of images. Image data processing is performed to detect changes in the spatial and temporal dimensions. Advanced Computer Vision (CV) algorithms can detect minute changes in the pipeline corridor, scanning each image pixel and quantifying the geometric variance of the assets and surroundings. In the next section, digital asset monitoring using RADAR technology is detailed.
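As a toy illustration of pixel-level change detection, the sketch below simply differences two co-registered acquisitions and thresholds the result; a production CV pipeline would add radiometric calibration, speckle filtering, and change classification on top of this.

```python
import numpy as np

def change_mask(img_t0, img_t1, threshold=0.2):
    """Flag pixels whose normalized intensity changed more than `threshold`
    between two co-registered acquisitions. Minimal sketch only."""
    a = img_t0.astype(float) / img_t0.max()
    b = img_t1.astype(float) / img_t1.max()
    return np.abs(b - a) > threshold

# Synthetic 100x100 scene: a 'disturbance' appears near the corridor.
rng = np.random.default_rng(1)
t0 = rng.uniform(0.4, 0.6, (100, 100))
t1 = t0.copy()
t1[40:55, 60:80] += 0.3           # e.g. excavation or ground movement
mask = change_mask(t0, t1)
print("changed pixels:", int(mask.sum()))
```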


1.3.6 Radar


Satellite radar remote sensing technology is difficult to compare with other methods because lower-cost electronics are just beginning to make Synthetic Aperture Radar (SAR) technology economical for the monitoring of assets. The capabilities of SAR can overcome the limitations of optical satellite data, such as obstruction due to cloud cover and atmospheric interference [3, 15], as seen in Figure 1.12a. SAR systems take advantage of the long-range propagation attributes of radar signals and the complex information processing capability of modern digital electronics to provide high-resolution imagery, as shown in Figure 1.11.


Figure 1.11 Basic principle of synthetic aperture radar, data acquisition along the flight (azimuth), and swath-width coverage on the ground.


(Reference [16]/RCraig09/CC BY-SA 4.0.)


SAR complements other optical imaging capabilities; it can accurately capture data at night and distinguish a target’s reflections of radar frequencies (Figure 1.12b). From a monitoring perspective, SAR technology provides structural terrain information to geologists for mineral exploration, oil spill boundaries on the water to environmentalists, sea state and ice hazard maps to navigators, and surveillance and targeting information [17, 18]. There are many other applications for this technology. Some of these, mainly civilian ones like environmental monitoring, land-use and land-cover mapping, and civil infrastructure monitoring, require extensive coverage area imaging at high resolutions [19]. SAR technology has not yet been adequately explored in the pipeline industry and is just beginning to become economical for downstream and upstream pipeline monitoring.


Figure 1.12 Satellite data: (a) multispectral sensors have limitations in penetrating cloud cover; (b) radar amplitude data are able to penetrate cloud cover.


1.3.7 SAR Time Series


An advanced time series technique using Interferometric SAR (InSAR) data can monitor structural health in near real time. A technique that permits the detection of deformation of infrastructure laid on or near the earth’s surface, InSAR has been used extensively to measure displacements associated with earthquakes, sub-surface movement, and many other crustal deformation phenomena. An analysis of a time series of SAR images extends the area where interferometry can be successfully applied and allows the detection of smaller displacements, over time frames ranging from weeks to years [13].
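In quantitative terms, InSAR converts unwrapped interferometric phase to line-of-sight (LOS) displacement via d = -(λ/4π)·Δφ, where λ is the radar wavelength and Δφ the unwrapped phase. The sketch below applies this relation, assuming the Sentinel-1 C-band wavelength (about 5.55 cm) as an example.

```python
import numpy as np

WAVELENGTH_M = 0.0555  # Sentinel-1 C-band (~5.55 cm), assumed for the example

def los_displacement(unwrapped_phase_rad):
    """Convert unwrapped interferometric phase (radians) to line-of-sight
    displacement (metres). Sign convention: positive = toward the satellite."""
    return -(WAVELENGTH_M / (4 * np.pi)) * unwrapped_phase_rad

# One full phase cycle (2*pi) corresponds to ~half a wavelength of motion.
print(los_displacement(np.array([0.0, np.pi, 2 * np.pi])) * 1000, "mm")
# -> roughly [0, -13.9, -27.8] mm along the line of sight
```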


InSAR can complement and even transcend ground-based measurements, which sometimes tend to under-sample the displacement field in spatial (GPS antennas are available only at specific points) or temporal (e.g., low frequency of leveling measurements) domains (Figure 1.13). The formidable advantage of space-borne InSAR is the ability to monitor pipelines nearly in real time.


In addition, integrating InSAR results with ILI data can add considerable value. It strengthens the historical data for pipeline movements (drifts) and provides a concrete integrity assessment methodology for monitoring unpiggable pipelines. Another significant advantage of integrating InSAR with the IDW is that a mitigation analysis model can be developed to predict the theft or rupture of a pipeline, i.e., where and when it may occur. These are some of the many questions operators ask while managing pipeline integrity, especially during the energy transition, when a new fuel (such as H2) is to be transported using existing pipelines.


Figure 1.13 Interferometry SAR (InSAR) technique; measurements are acquired from different viewing angles at different times.


(Reference [20]/Commonwealth of Australia/CC BY 4.0.)


1.4 Technology Transition with Energy Transition


Embracing the advancements of emerging technologies can pave the way for a new digital business model focused on pipeline integrity. This model will bring forth fresh opportunities and lead us toward the complete digitalization of integrity practices. These technological advancements are rapidly evolving, enhancing operational efficiency and safety while making it necessary to stay up to date with the latest trends. To maximize the benefits of this transition, organizations must update their skill sets, adopt a flexible approach, and effectively adapt to the transfer of technology. In addition, the structure and functioning of organizations will be significantly influenced by these changes. This is the essence of digital transformation, which will reshape work patterns in the foreseeable future.


Further adoption of a framework-based analytics approach is revolutionizing how service providers engage with customers. By leveraging cloud-based data and computation, intricate algorithms can be run and their results accessed by professionals in the industry anytime and anywhere in the world. This connectivity and data-driven approach unlock the full potential of professionals in the field.


References



  1. van Elteren, R., Diggory, I., Spalink, J., and Singh, G. (2020) Pipeline integrity framework: ‘Mind the gap!’ Proc. 15th Pipeline Technology Conference, online, 30 March–2 April 2020.
  2. Charlton, A.A. and Curson, N. (2020) Applying an Industry 4.0 philosophy to pipeline integrity, the future beyond digitalization. Proc. 15th Pipeline Technology Conference, online, 30 March–2 April 2020.
  3. Gajjar, Y. (2017) Monitoring of pipeline RoU using remote sensing and GIS techniques. Proc. ASME India Oil and Gas Pipeline Conference, Mumbai. https://doi.org/10.1115/IOGPC2017-2428.
  4. Barth, M. (2020) Digitalization projects for the oil and gas industry. Proc. 15th Pipeline Technology Conference, online, 30 March–2 April 2020.
  5. Boettcher, A. and Chambless, K. (2018) The future of pipeline integrity. World Pipelines.
  6. Palmer Jones, R., Smith, M., Capewell, M., Pesinis, K., and Santana, E. (2019) The good, the bad, and the ugly: categorizing pipelines using big data techniques. Pipeline Pigging & Integrity Management (PPIM) Conference, February 2019, Houston, Texas, USA.
  7. Palmer Jones, R. (2020) Managing pipeline threats—the way forward part 3: information to decision. https://www.rosen-group.com/global/newsletter/Edition-12/Managing-pipeline-threats-3.html (accessed 12 June 2023).
  8. Capewell, M., Pesinis, K., and Smith, M. (2021) Data analytics for tracking the deterioration of pipelines. Proc. 16th Pipeline Technology Conference, March 15–18, 2021.
  9. Taylor, K. et al. (2022) Predicting the likelihood of external interference damage using machine learning algorithms trained on in-line inspection data. Proc. Pipeline Technology Conference, March 7–10, 2022, Berlin. ISSN 2510-6716.
  10. Smith, M. et al. (2021) Deep learning for high-resolution external corrosion prediction in uninspected pipelines. PPIM, February 24–25, 2021.
  11. Sandana, D. et al. (2022) Employing machine learning for stress corrosion cracking management in the pipeline de l’Île de France. Proc. Technology for Future and Ageing Pipelines, March 29–31, 2022, Gent, Belgium.
  12. Capewell, M. et al. (2022) Virtual ILI—predicting the results of an in-line inspection. Proc. PPIM, January 31–February 4, 2022.
  13. Chang, L., Sakpal, N.P., Elberink, S.O., and Wang, H. (2020) Railway infrastructure classification and instability identification using Sentinel-1 SAR and laser scanning data. Sensors, 20(24), 1–16. https://doi.org/10.3390/S20247108.
  14. Chang, L., Dollevoet, R., and Hanssen, R. (2018) Monitoring line-infrastructure with multisensor SAR interferometry: products and performance assessment metrics. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 11(5), 1593–1605. https://doi.org/10.1109/JSTARS.2018.2803074.
  15. Singhroy, V., Fobert, M.A., Li, J., Blais-Stevens, A., Charbonneau, F., and Das, M. (2021) Advanced radar images for monitoring transportation, energy, mining and coastal infrastructure. In: Singhroy, V. (ed.), Advances in Remote Sensing for Infrastructure Monitoring. Springer Remote Sensing/Photogrammetry, Springer, Cham. https://doi.org/10.1007/978-3-030-59109-0_1.
  16. Synthetic-aperture radar. https://en.wikipedia.org/wiki/Synthetic-aperture_radar (accessed June 12, 2023).
  17. Hall, M. (2022) A seismic shift in satellite imagery. World Pipelines, Coating and Corrosion 2022.
  18. Arya, A. (2023) Looking above. World Pipelines, 23(4).
  19. Hole, J., Holley, R., Giunta, G., De Lorenzo, G., and Thomas, A. (2012) InSAR assessment of pipeline stability using compact active transponders. Proc. Fringe 2011 Workshop, Frascati, Italy, September 19–23, 2011, European Space Agency SP-696. Noordwijk, The Netherlands: ESA Communications.
  20. Interferometric synthetic aperture radar. https://www.ga.gov.au/scientific-topics/positioning-navigation/geodesy/geodetic-techniques/interferometric-synthetic-aperture-radar, Geoscience Australia, © Commonwealth of Australia, provided under a Creative Commons Attribution 4.0 International Licence (accessed June 7, 2023).
