Q: At Sensors Converge 2023, you are delivering a keynote address on “AI in mission critical cyber-physical systems” and our conference preview mentions the term “cyber-physical continuum.” Can you please define the term and, importantly, does the cyber-physical continuum pose greater challenges than we’ve faced in the recent past?
Fersman: The term cyber-physical continuum originated in embedded systems, where each embedded physical controller of a larger system has a corresponding piece of code defining its behavior. Cyber-physical systems are networked embedded systems in which the interactions between the physical and the cyber parts play an important role. Today, physical systems can be represented by digital twins that reflect every state of the physical system. The beauty of this is that to verify that the system adheres to desired properties, such as functional correctness or trustworthiness, one does not need to run tests on the physical system; it is enough to simulate and verify the cyber part. The more granularly the cyber part describes the physical part, the better control we have over the physical part. Communication between the two parts therefore plays an important role, because the two halves of the cyber-physical continuum are supposed to mirror each other in real time.
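The mirroring-and-verification idea above can be sketched in a few lines of code. This is a minimal illustration, not any specific product: the `DigitalTwin` class, its field names, and the safety property are all hypothetical, chosen only to show how a desired property can be checked on the cyber part instead of on the physical system.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PhysicalState:
    """Telemetry reported by the physical controller (illustrative fields)."""
    temperature_c: float
    valve_open: bool

class DigitalTwin:
    """Mirrors the last reported physical state and checks properties on it."""
    def __init__(self) -> None:
        self.state: Optional[PhysicalState] = None

    def sync(self, state: PhysicalState) -> None:
        # In a real continuum this update arrives over a low-latency link,
        # keeping the cyber part mirrored to the physical part in real time.
        self.state = state

    def verify_safety(self) -> bool:
        # Hypothetical desired property: the valve must be open
        # whenever temperature exceeds 80 C.
        if self.state is None:
            return False
        return self.state.valve_open or self.state.temperature_c <= 80.0

twin = DigitalTwin()
twin.sync(PhysicalState(temperature_c=85.0, valve_open=True))
print(twin.verify_safety())  # True: the property holds on the mirrored state
```

The point is that `verify_safety` runs entirely on the mirrored state; the more faithfully `PhysicalState` captures the physical side, the more such checks can replace physical testing.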
Q: We’ve always wanted an interplay of the parts in a system, but has it become suddenly much more onerous or complex? And do you have insights on what problems or dangers we can see without the proper interplay? Are there notable privacy/security examples you can name?
Fersman: As things become increasingly interconnected, the sizes of systems grow. One can imagine going from modeling and analysis of a single controller in a car, to a model of a full car with hundreds of such controllers, to a model of a fleet of cars, to an intelligent transport system where the cars interact with road infrastructure and the telecom network, to a smart city. Now imagine trying to verify the correctness of the whole system down to the level of individual controller functionality – this is a highly complex problem. Exposing physical functionality through software APIs offers great opportunities but also introduces both security and privacy threats; therefore, mechanisms for preventing security attacks need to be in place from day one. In addition, every time AI algorithms are used in such a system, we need to make sure that they adhere to trustworthy AI principles such as explainability and non-bias.
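A quick back-of-the-envelope calculation shows why verifying the whole system down to the controller level is so hard: the composed state space grows exponentially with the number of controllers. The numbers below are illustrative assumptions (four local modes per controller), not figures from any real vehicle.

```python
def composed_state_space(num_controllers: int, modes_per_controller: int) -> int:
    """Size of the composed state space when each controller has
    `modes_per_controller` local modes (worst case: full product)."""
    return modes_per_controller ** num_controllers

# One controller is trivial; a car with hundreds of controllers is not.
# With an assumed 4 modes per controller, 100 controllers already yield
# 4**100 (about 1.6e60) composed states -- far beyond exhaustive checking.
for n in (1, 10, 100):
    print(n, composed_state_space(n, 4))
```

This exponential blow-up is why verification of interconnected cyber-physical systems relies on abstraction and compositional reasoning rather than brute-force enumeration.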
Q: You have a huge topic, but AI has come front and center for years and especially since December with ChatGPT and LLMs and generative AI. Is the pace of adoption itself perhaps leading to less of a concern for basics like safety, privacy and security? Are you worried that might be so?
Fersman: The introduction of ChatGPT has indeed created a new wave of hype, awareness of what AI is capable of, and, at the same time, increased worry in society. OpenAI released GPT-3 a while ago, and many companies, including Ericsson, have been working with generative AI and LLMs for several years. These are powerful tools, and, as with other AI methods, they won’t work properly if they are fed skewed data. This increased awareness of AI’s capabilities calls for an increased focus on methods for guaranteeing the trustworthiness properties of AI methods.
Q: Efficiency of AI and optimizing performance is the goal of many AI apps, but are we seeing high degrees of optimal performance for cyber-physical? Is it anything like the five 9’s capability? Is that even the goal?
Fersman: Resilient telecom networks with a high level of availability are achievable even without AI. The role of AI in telecom is to achieve higher performance given the same resources, both in terms of quality of service and CO2 emissions, and to optimize network operations to reduce downtime in a preventive fashion.
Q: Would Ericsson or your partners be working on something like what Nvidia recently proposed: software called NeMo Guardrails, which is designed to ensure that emerging apps powered by LLMs are accurate, appropriate, on topic, and secure? Our story looked at the concept.
Fersman: Trustworthy methods for guardrailing any AI technique, including LLMs, deployed in production are of the highest importance. Ericsson has adopted the European Union’s ethics guidelines for trustworthy AI, including human agency and oversight, technical robustness and safety, privacy and data governance, transparency, diversity, non-discrimination and fairness, societal and environmental well-being, and accountability. Our techniques for trustworthy AI are applied to the AI algorithms deployed in our products and services, including LLM-based algorithms.
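To make the idea of guardrailing concrete, here is a minimal sketch of wrapping a text-generation callable with pre- and post-checks. This is a toy keyword filter of my own devising, not how NeMo Guardrails or Ericsson's methods work; the denylist, the `guardrail` function, and the redaction rule are all hypothetical.

```python
import re

BLOCKED_TOPICS = ("password", "credit card")  # illustrative denylist, not a real policy

def guardrail(user_input: str, generate) -> str:
    """Wrap any str -> str generation function with simple checks.

    Real guardrail frameworks apply far richer policies (topic models,
    fact checks, safety classifiers); this only shows the wrapping pattern.
    """
    # Pre-check: refuse inputs that touch blocked topics.
    if any(topic in user_input.lower() for topic in BLOCKED_TOPICS):
        return "I can't help with that topic."
    answer = generate(user_input)
    # Post-check: redact long digit runs (e.g. something card-number-shaped).
    return re.sub(r"\b\d{12,19}\b", "[REDACTED]", answer)

print(guardrail("What's the weather?", lambda q: "Sunny."))  # Sunny.
```

The key design point is that the checks sit outside the model: the same wrapper can guard any generation backend without retraining it.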
Q: Is there a standard that makes sense here? I don’t think relying on Nvidia or another company will satisfy all needs, right?
Fersman: While standards always play an important role for telecom infrastructure vendors in achieving interoperability, in trustworthy AI applications it is rather certifications and regulations that matter. The telecom system does indeed partially rely on partners such as Nvidia, but to achieve the highest quality guarantees for the whole system offered to consumers and enterprises, we make sure to validate and internally certify our AI methods, just as we do with the software we develop. The technology for certifying AI methods is, however, different from that for certifying software.
Q: While this might be off topic, what do you think about federal legislation that would generally protect data privacy for persons? That is a sweeping approach that some vendors might find hard to meet. If individuals have the right to all their data, how would that ever lead to AI training?
Fersman: An important guiding principle is that individuals need to be in control of their data. The level of understanding of AI’s impact on society is increasing, and this leads to a better appreciation of the advantages of data sharing, as well as a clearer understanding of what data should not be shared. To optimize telecom systems with the help of AI, we often use synthetic data and aggregated data about users.
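One simple way aggregation can protect individuals is to release only group statistics, and only when enough users contribute. The sketch below is a hypothetical k-anonymity-style threshold of my own construction, not Ericsson's actual pipeline; the sample values and the `k_min` parameter are invented for illustration.

```python
from statistics import mean
from typing import Dict, Optional

# Illustrative per-user throughput samples (Mbps); a real pipeline would
# aggregate per cell and time window before any model training.
user_throughput: Dict[str, float] = {"user_a": 42.0, "user_b": 37.5, "user_c": 51.2}

def aggregate_for_training(samples: Dict[str, float], k_min: int = 3) -> Optional[float]:
    """Release only an aggregate, and only if at least k_min users contribute.

    With too few contributors, even an average could single out an individual,
    so the function returns None instead of a value.
    """
    if len(samples) < k_min:
        return None
    return mean(samples.values())

print(aggregate_for_training(user_throughput))
```

Training on such aggregates (or on synthetic data generated from them) lets the optimization use population-level patterns without exposing any single user's measurements.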
Q: Finally, how do you answer the key question posed in the keynote blurb that asks how AI can be used to optimize performance of systems? Are there real world examples yet?
Fersman: We have many real-world examples today. Just like telecommunication itself, you only notice it when it does not work. AI-powered systems predict quality-of-service degradations and proactively optimize the affected network cells. AI-powered site inspections use drones and image recognition to detect possible failures and reduce unnecessary tower climbing. We use AI to optimize infrastructure utilization, increase performance, and minimize the CO2 emissions of telecom networks.
Elena Fersman, PhD, is vice president and head of the global AI accelerator at Ericsson and an adjunct professor of cyber-physical systems. She delivers her keynote address at Sensors Converge on June 21 at 11 PT. Sensors Converge 2023 in Santa Clara, California, runs June 20-22, and registration is online.