Intelligent Rail Summit – Day Three

The huge potential of ‘Big Data’ – the term we’ve come to know for data so large and complex that it is difficult to process using traditional techniques – was the perfect platform for underpinning the key messages on the third and final day of the Intelligent Rail Summit 2016.

In his opening presentation, Gerard Kres of Siemens introduced the company’s ‘Internet of Trains’ application of Big Data analytics in rail, and examined how data can be used to create real value for customers.

Breadth of data

The challenge, he said, was to ‘turn data into information and drive appropriate actions’.

“Our customers want good products,” said Kres. “You have to have very different underlying IT structures. For me, ‘Big Data’ is not just about the number of entries in a database, but it is about the breadth of the data.”

Kres gave some idea of the sheer amounts of raw data being managed, saying that rail vehicles today send between one and four billion data points per year, while at the same time the rail infrastructure can send billions of internal messages. On top of this is data relating to work orders, spare parts, geography and weather.

“Every year we are getting 50-100 terabytes of data. You have to take the process, the algorithms and the software, and integrate it into our customers’ needs.”

Kres also explained how, through Siemens’ Railigent platform, personnel are a key driver in enabling efficient data interpretation. In order to implement the necessary digital services, he added, Siemens had built a large team of data scientists.

“We have a very international team working on this, which reflects the science at work. Within modern IT, you don’t find experts just around the corner but we have cutting edge experts to help implement these projects.”

He cited several case studies showing how tangible business value can be achieved. For example, component monitoring on the Thameslink Class 700 in the UK, especially for doors, has been shown to reduce delays and increase availability.

“Siemens has a unique differentiator,” he said. “To ensure customer value creation, insight generation needs to combine data science with domain expertise – the more data it can give me the better.”

Diego Galar of Luleå University was equally keen to emphasise the scale of data being extrapolated and interpreted: “You can’t imagine the scale of the data we are going to be producing. All of this data goes somewhere and we want to get some benefits from it.”

His presentation, Railway Assets: a potential domain for big data analytics, began by posing the questions: Why Big Data in transportation? & How Did We Get Here?

He gave some scale to the size of the challenges faced, describing how ‘industrial and transportation data is a late but powerful entry’ to the data sharing world populated by the likes of social media. “Data generated by social media is nothing compared to what we are producing in railways, but there is a substantial difference – this is machine-generated data that we are dealing with. Information from humans is a tiny piece of information, and that is why we have to take this as a challenge,” added Galar. “An increasingly sensor-enabled and instrumented business environment generates huge volumes of data with machine-speed characteristics.”

Track-side condition monitoring

He cited examples of data collection in action such as track-side condition indicators providing offline information, and on-board indicators providing ‘real time’ online data on issues like load, vibration and temperature. Smart bearings, he said, were a good example of a sensor for condition monitoring.

“You can check the condition of the train, but also the track,” he said. “There is a double benefit, for the infrastructure manager and the rolling stock manager. We need this sharing of information. We have to fuse both information sources, because together they will deliver a much more valuable service.” He also described ‘prescriptive maintenance’ as ‘one step ahead of prediction’.

“Transportation data is becoming one of the largest domains for big data analytics and a target of data science. Massive observations of certain processes do not assure the quality of deliveries. There is a lack of definition in the services expected by customers. We have the ingredients, but the customer has a huge lack of definition of the services. We don’t know what we can really get from it. Prognosis and diagnosis are vital.”

With his presentation, Enhanced points condition monitoring on Network Rail infrastructure, the University of Birmingham’s Louis Saade gave delegates an insight into his work with the UK’s rail network manager. Network Rail, he said, was moving towards an ‘intelligent asset’ management strategy, a key part of which is the ‘intelligent infrastructure programme’, using strategic remote condition monitoring (RCM) of assets to make a major contribution towards improving overall network performance in terms of reliability, safety and efficiency.

RCM, added Saade, was driving maintenance of fixed assets towards a ‘predict and prevent’ model, by detecting early deterioration and therefore reducing the potential for affecting services. The intelligent infrastructure currently monitors more than 42,000 critical assets including points, track circuits, points heaters, rail temperature and power supplies.

Delay minutes

“One of the most common causes of delay minutes attributed to Network Rail is points failures,” he said. “Points condition monitoring (PCM) has already been implemented, which triggers an alert or alarm if one of the monitored values – swing time, average current, peak current – exceeds a predetermined threshold.”

A key part of this, he added, was equipping personnel with the necessary tools: “Engineers on the tracks can set relevant thresholds, perhaps taking into account certain elements, such as the seasons. We want the system to provide the engineers with confidence, to give them something they can rely on, and something they can put their trust in.”
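A minimal sketch of such a threshold check, with engineer-set seasonal limits, might look like the following; the parameter names and limit values are illustrative assumptions rather than Network Rail’s actual PCM configuration.

```python
# Minimal sketch (not Network Rail's implementation): threshold-based points
# condition monitoring. Parameter names and limits are illustrative only.
from dataclasses import dataclass

@dataclass
class SwingRecord:
    swing_time_s: float       # time for the points machine to complete a swing
    average_current_a: float  # mean motor current during the swing
    peak_current_a: float     # highest motor current during the swing

# Hypothetical seasonal thresholds an engineer might configure
THRESHOLDS = {
    "summer": {"swing_time_s": 6.0, "average_current_a": 4.0, "peak_current_a": 9.0},
    "winter": {"swing_time_s": 7.0, "average_current_a": 4.5, "peak_current_a": 10.0},
}

def check_swing(record: SwingRecord, season: str) -> list[str]:
    """Return an alert message for every monitored value above its threshold."""
    limits = THRESHOLDS[season]
    alerts = []
    for name, limit in limits.items():
        value = getattr(record, name)
        if value > limit:
            alerts.append(f"{name} = {value:.1f} exceeds {season} limit {limit:.1f}")
    return alerts

# Example: a sluggish winter swing with a high peak current
print(check_swing(SwingRecord(7.4, 4.2, 10.5), season="winter"))
```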

The main requirements of an Enhanced Points Condition Monitoring (ePCM) model are a system, or collection of systems, that can detect the majority of point machine faults; a system that works with the majority of machines; and a system that presents its results in a way engineers can understand, with few false alarms, in order to build trust.

“We don’t want the knowledge to disappear,” said Saade. “We want to provide outcomes that people can understand and see in the outputs. The idea is also to have fewer false alarms by adopting these rules.”

One of the overriding messages of the MERMEC Group’s Pietro Pace was that railways have yet to fully realise the benefits of making best use of the operational information they collect, much of which still ends up in so-called ‘data graveyards’.

Condition monitoring – essentially the process of determining the condition of infrastructure and rolling stock – can, he asserts, provide crucial information that enables operators to optimise their management and maintenance of their assets. Historically, data has been collected, checked against the appropriate safety parameters and that has been it. It is only used once, and thereafter left in the ‘data graveyard’, never to be seen again.

Safety-critical defects

That, says Pace, is where condition monitoring can make a critical difference. Rather than just viewing data sets in isolation, condition monitoring measures against several parameters to identify any significant changes in assets or components that may, left unchecked, severely affect operations. Operators can take data from multiple condition monitoring devices and, through rigorous analysis, identify any safety-critical defects and recommend appropriate maintenance and renewal works.

Ultimately, he adds, it is all about enabling the railway sector to do things better and not create unnecessary additional work: “Any hardware device or data analytics software that is introduced in a railway organisation has to be considered not as a replacement of the human job but as a tool to allow operators to work smarter, and not harder.”

High-quality asset data at your fingertips – that was the focus of Raymond Soeters of Netherlands-based ProRail, who said their ambition, through the SpoorData (rail data) programme, was to achieve ‘safe rail’ (zero avoidable accidents); ‘reliable rail’ (zero avoidable disruptions); ‘punctual rail’ (further increases in punctuality) and ‘sustainable rail’ (less energy consumption and the highest rung on the CO2 performance ladder).

“How are our ambitions related to the programme of SpoorData?” he asked delegates. “I think they are absolutely intertwined and can make a difference, and we do that by being smarter. We need to understand the key factors that influence our performance, and this is why we started SpoorData. Two years ago we first had to face the problems that made our data not good enough.”

Standardised error codes

Soeters highlighted the distinction between Configuration Data – where is it? (geographical data), which function does it perform? (functional data) and what is it? (physical data) – and Steering Data: what is the planned and realised maintenance? (maintenance data), what is its condition? (conditional data) and how does it perform? (performance data).
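To make the split concrete, a minimal sketch of how an asset record might separate the two categories is given below; the field names and example values are illustrative and not part of ProRail’s SpoorData schema.

```python
# Minimal sketch of the configuration/steering data split described above.
# Field names and values are illustrative, not ProRail's SpoorData schema.
from dataclasses import dataclass

@dataclass
class ConfigurationData:
    geographical: str   # where is it?
    functional: str     # which function does it perform?
    physical: str       # what is it?

@dataclass
class SteeringData:
    maintenance: str    # planned and realised maintenance
    condition: str      # what is its condition?
    performance: str    # how does it perform?

@dataclass
class Asset:
    asset_id: str
    configuration: ConfigurationData
    steering: SteeringData

switch = Asset(
    "SW-0042",
    ConfigurationData("km 12.3, track 2", "junction switch", "1:9 turnout"),
    SteeringData("greased 2016-10-01", "swing time nominal", "no delay minutes this period"),
)
print(switch.configuration.functional, "|", switch.steering.condition)
```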

He outlined what he saw as the top five key considerations for ‘keeping data structurally reliable’:

5 – ‘Crystal clear and lean information specifications’: “What do you need to do your job as quickly as possible?” said Soeters. “We challenge the data over and over again. But we were not finished. Next, we went to the contractors and explained the need for us to have the information. We agreed on the standardisation on error codes, on the maintenance codes, and on how conditional data is needed, for example on a switch.”

4 – One information model for the entire sector

3 – Digital verification: Outside = Inside. “If it is not detected, it is stored as a mismatch,” he said. “So for all those mismatches, we are solving problems by changing designs, and changing it outside.”

2 – ‘Use the wisdom of the crowd’: “This is what we use from other organisations in The Netherlands. Present it so that it is visual to the outside world.”

1 – ‘Together, with our partners’: “This is something we are very proud of,” he said. “We are completely independent from our partners. We have to find solutions. Years ago we were always fighting with our contractors. Now, you might not agree on anything but it is about working together.”

Steven Hagner of ABB Enterprise Software was equally keen to emphasise the ‘proactive, not reactive’ approach to managing the performance of rail assets: “We have a lot of coverage worldwide and it has given us this unique position – manufacturing on one side and software on the other. We bring a different way of seeing the rail world.”

In developing the types of software needed in the industry, one of the key concerns, he felt, was how the workplace had changed: “One of the bigger concerns is not so much people retiring, but it is the people coming in. They are never going to be doing the same job for 40 years, it just doesn’t happen any more. The dynamic has changed, and people are expecting different tools to do the job.”

Missed maintenance

The ‘evolution in the analytics world’ meant today’s personnel needed the support to make the crucial decisions.

“Give us some recommendations about what we can do,” he asked delegates. “What about missed maintenance? Nobody likes to talk about how maintenance schedules sometimes get missed. It should be impacting the scores coming out of the systems.

“In the electronic transmission world, we are seeing a change. We are willing to put it in the cloud to leverage the benefits of a cloud platform. We are trying to keep the user interfaces as simple as possible. Personnel are now used to applications which work without being overly complicated.”

It was vital, he said, to combine ‘intelligent rail’ with inspections and maintenance history; consolidate asset types in one system; and act on the results, by documenting issues, generating work orders, and prioritising maintenance and replacement.

“It takes time to get to the bigger savings, but when you get people to trust the information, then the results come out,” he added.

Georg Neuper of Austrian Federal Railways (ÖBB) looked at ‘Data Warehouses’ and said their composition depended on different types of information, such as assets, maintenance and measurement data, while in the engineering field it included track, catenary and civil engineering data.

He outlined ÖBB’s goals of a competitive and customer-oriented infrastructure in the heart of Europe, and an attractive, sustainable railway system: “Sustainable infrastructure needs LCC (life cycle cost)-based asset management – what is necessary to achieve these goals?” he asked. “Modern and comprehensive asset data management.”

Data warehouses

He posed the question – What do we achieve with the LCM asset application and Data Warehouse?

“Step by step we receive a solution for objective planning of technical actions in the asset’s life cycle,” he said. “After this process we calculate our LCC and define what’s the right action at the right time. The vision is to develop the ultimate asset management – a combination of detailed technical prognosis and automatic economic evaluations.”
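As a rough illustration of what an LCC-based comparison involves, the sketch below computes the discounted cost of two candidate strategies for an asset; the cash flows, discount rate and function are illustrative assumptions, not ÖBB’s LCM application.

```python
# Minimal sketch, not OBB's LCM application: life-cycle cost (LCC) as the
# discounted sum of acquisition, maintenance and renewal cash flows, used to
# compare candidate actions. Figures and discount rate are illustrative.
def life_cycle_cost(acquisition: float, yearly_costs: list[float], rate: float = 0.04) -> float:
    """Net present cost of an asset over its life cycle."""
    return acquisition + sum(c / (1 + rate) ** (t + 1) for t, c in enumerate(yearly_costs))

# Compare 'renew now' against 'keep maintaining' for a notional track section
renew_now = life_cycle_cost(1_000_000, [20_000] * 20)
keep_maintaining = life_cycle_cost(0, [90_000] * 20)
print(f"renew now: {renew_now:,.0f}  keep maintaining: {keep_maintaining:,.0f}")
```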

The Rhomberg Sersa Group’s Patricia Marty began her presentation by outlining ‘the dream’: the perfect railway infrastructure without the need for maintenance or renewal. But the reality, she added, was this: “Use the available money efficiently on the right spots; maintain forward-looking infrastructure based on factors such as politics, timetables, types of trains and standards; and intensive usage of the infrastructure versus maintenance/renewal.

“You need the knowledge of the railway experts, because if you only have the specialists doing calculations, you do not know within which parameters you have to work – it is essential to put these together,” she said.

Technical prognosis

Sersa Maschineller Gleisbau AG operates a flexible and particularly user-friendly measurement database within the Rhomberg Sersa Rail Group. Customers rent precisely the area needed for their rail network within this ‘pool’ and keep hold of the data. Measurement data is stored and assessed independently of measurement providers. Using the measurement data, together with the experience and local expertise of the railway experts, upgrades and maintenance of their own rail network are planned.

Fault assessment of individual parameters is already standard. Rhomberg Sersa Rail Group even produces its own measurement data. Alongside the classic track parameters of rail geometry, gauge, crosslevel (cant) and twist, the overhead contact wire is also measured.

The next stage is to evaluate delicate fault combinations. For example, exceeding the twist limit combined with a gauge that is too narrow and a high line speed is much more serious than if the limit of the twist occurs with an unremarkable gauge and low line speed.

She said that in some circumstances the combination of these parameters is so unfavourable that no individual limit is exceeded and yet a dangerous situation still occurs on the network. It was their goal, she added, to offer the customer an algorithm with which the fault combinations outlined above can be recognised. In the future, far more measurement data will be available. Recognising the dangerous fault combinations will be the major task of the future.
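A minimal sketch of how such a combination rule might look is shown below; the limits, weighting and thresholds are illustrative assumptions rather than Rhomberg Sersa’s actual algorithm.

```python
# Minimal sketch of the fault-combination idea described above: twist, gauge
# narrowing and line speed judged together rather than one parameter at a time.
# The scoring rule and limits are illustrative assumptions, not Rhomberg Sersa's algorithm.
def combination_risk(twist_mm_per_m: float, gauge_mm: float, speed_kmh: float) -> str:
    twist_ratio = twist_mm_per_m / 3.0                    # fraction of an assumed twist limit
    narrowing_ratio = max(0.0, (1435 - gauge_mm) / 10.0)  # fraction of an assumed narrowing limit
    speed_factor = speed_kmh / 160.0                      # weight faster lines more heavily
    score = (twist_ratio + narrowing_ratio) * (0.5 + speed_factor)
    if score >= 1.5:
        return "dangerous combination - inspect immediately"
    if score >= 1.0:
        return "unfavourable combination - plan maintenance"
    return "acceptable"

# Twist at its limit plus a narrow gauge and high line speed is far more serious
# than the same twist with an unremarkable gauge and low line speed
print(combination_risk(twist_mm_per_m=3.0, gauge_mm=1428, speed_kmh=160))
print(combination_risk(twist_mm_per_m=3.0, gauge_mm=1435, speed_kmh=60))
```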

Florian Pototschnig and Daniel Dotzl of Wienerlinien, the body responsible for public transport in Vienna, discussed some of the infrastructure challenges in getting people off private transport and onto public transport. Their presentation, ‘Big Data and the Benefits of Digitalisation’, highlighted data management as just one of several key factors governing the trend of digitalisation.

Data quality

“The big challenge is that people are coming to Vienna, and we need to build more underground, more lines, more buses,” said Pototschnig. “We have to renew, we have to maintain. This is what you have been talking about and maybe this is the link. Originally there was not so much money for us, and so we had to start thinking about our asset management.”

His colleague expanded on some of the elements of Big Data that will need to be, and in many cases already are being, utilised.

“In the internet of things we see this happening,” said Dotzl. “They have information about their indicators, their technical assets. This is all about automated data collection, and another big issue for us as a public transport provider is open source data. We get traffic light information, addresses, and we add all this to our technical assets. It is not about data quantity, it is about data quality, and for this we need a good data management.”

The era of Big Data, meanwhile, will make freight wagons ‘virtually intelligent’, according to Peter Boom, whose company Voestalpine Signaling offers solutions with cutting-edge RFID technology that enables railway undertakings to organise their rolling stock more efficiently.

Freight supply chain

“In the rail industry we have so far not really used data in a way that brings the most benefits,” said Boom, an experienced rail logistician. With over 25 years in the business, much of it working on the interface between railway logistics and information, he believes that in the era of ‘big data’ the infrastructures are now being put in place to enable all stakeholders to share all of the available data right across the rail freight supply chain.

The University of Genoa’s Emanuele Fumeo presented on Train Delay Prediction System for Large-Scale Railway Networks based on Big Data Analysis.

One part of this focused on the train delay prediction problem: if a train is at checkpoint A with 5 minutes of delay, what will its delay be at checkpoint B, and at checkpoint C? The current predictive methodology is based on the characteristics of the trains and the line, and on simple statistics aimed at calculating the amount of time needed to complete each section of the trip and using this to make predictions.
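A minimal sketch of that baseline approach, propagating a train’s current delay using average section running times, might look like the following; the section times and schedule are made-up figures for the example, not Fumeo’s data.

```python
# Minimal sketch of the 'current' statistical baseline described above: predict
# the delay at the next checkpoints from typical running times per section,
# ignoring external factors. All data here is illustrative.
from statistics import mean

# Historical running times (minutes) observed between consecutive checkpoints
historical_section_times = {
    ("A", "B"): [7.0, 7.5, 8.0, 7.2],
    ("B", "C"): [12.0, 11.5, 13.0, 12.5],
}

# Scheduled running times (minutes) for the same sections
scheduled_section_times = {("A", "B"): 7.0, ("B", "C"): 12.0}

def predict_delays(current_delay_min: float, route: list[str]) -> dict[str, float]:
    """Propagate the current delay using average minus scheduled section times."""
    predictions = {}
    delay = current_delay_min
    for origin, destination in zip(route, route[1:]):
        expected = mean(historical_section_times[(origin, destination)])
        delay += expected - scheduled_section_times[(origin, destination)]
        predictions[destination] = round(delay, 1)
    return predictions

# A train leaves checkpoint A 5 minutes late
print(predict_delays(5.0, ["A", "B", "C"]))
```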

Train movements

But this method, said Fumeo, ‘is not able to take into account external factors affecting railway operations’.

In Italy every day, he added, there were 10,000 trains in service, each with an average of 12 checkpoints, amounting to 120,000 train ‘movements’ and 10 gigabytes of messages.

Using a new model with improved algorithms, the new delay prediction tool outperformed the current one: it is designed to cope with large-scale railway networks generating large amounts of data, and it can be integrated into a traffic management system (TMS) to provide advanced and accurate forecasting of train delays.

The next research steps were, he added, to find new railway data to be integrated in data-driven models, and to consider the integration of other external data, such as passenger data, to ascertain whether further improvements are possible.

Visualising congestion

The conference’s final presentation came courtesy of Sei Sakairi of the East Japan Railway Company. In New Transport Arrangements Using ICT, Sakairi demonstrated how they had developed a prototype model which enabled train dispatch personnel to see congestion and delays in real time, and a smartphone app for passengers providing similar information.

The advantages of visualising congestion, he said, were that the dispatcher could adjust operations directly in line with the state of each train and provide information to passengers quickly. As well as utilising data on train locations, delays and passenger volumes, trains also have sensors which measure vehicle weight, thus making it possible to calculate how many passengers are aboard.
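A minimal sketch of the weight-based estimate might look like the following; the tare weight and average passenger mass are illustrative assumptions, not JR East’s actual figures.

```python
# Minimal sketch of the weight-based load estimate mentioned above: subtract the
# empty (tare) weight of the train and divide by an assumed average passenger
# mass. The figures are illustrative, not JR East's actual values.
def estimate_passengers(measured_weight_kg: float, tare_weight_kg: float,
                        avg_passenger_kg: float = 60.0) -> int:
    """Rough passenger count from on-board weight sensors."""
    payload = max(0.0, measured_weight_kg - tare_weight_kg)
    return round(payload / avg_passenger_kg)

# Example: a 10-car set weighing 282 t loaded against a 260 t tare weight
passengers = estimate_passengers(282_000, 260_000)
print(f"approx. {passengers} passengers aboard")
```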

The positive results from the models, he showed, mean they will be rolled out in spring 2017.

Author: Simon Weedy
