Thousands of athletes attending the Paris 2024 Olympics will be able to use their mobile phones to text questions to a customized chatbot and quickly get texted responses. The games officially start July 25.
Intel worked with the International Olympic Committee to develop the chatbot, Athlete365, which will offer information on how to navigate the Olympic Village in Paris as well as on-demand answers about rules and guidelines.
In an online demonstration with reporters, Intel and IOC officials showed how the chatbot handled a text query asking whether posting photographs from Olympic venues is allowed during the games. The chatbot responded quickly, noting that posting photos on personal websites and social media accounts is allowed as long as athletes don’t post commercial content, AI-generated content or recordings from restricted areas. The response even included a link to documents describing the social media guidelines.
Intel is using the chatbot to show off its prowess in working with industry and developers on open generative AI, built around a retrieval-augmented generation (RAG) system. In this case, the RAG pipeline is powered by Intel Gaudi accelerators and Xeon processors, reflecting Intel’s heavy focus on open AI systems and platforms.
For the IOC, the chatbot is designed to improve communications: it supports six major languages and will be accessible to the roughly 11,000 athletes in Paris. “The challenge is the volume of information,” said Kaveh Mehrabi, IOC Director of Athletes. “As good as access to information can be, it can also be overwhelming.”
Intel’s Justin Hotard, general manager of the data center and AI group, described the partnership with the IOC as a way to demonstrate Intel’s dedication to making AI accessible. That approach includes collaborations with partners such as Red Hat and Seekr. Intel’s GenAI approach relies on a production-ready RAG system built on the Open Platform for Enterprise AI (OPEA), designed to be highly flexible and customizable with components from various OEM systems and industry partners.
Intel’s GenAI approach integrates OPEA microservices into a RAG system designed for Xeon processors and Gaudi AI accelerators. It works with Kubernetes and Red Hat OpenShift, among others, to provide standard APIs with security and telemetry. Intel worked with OPEA to build its open software stack for RAG and LLM deployment, using PyTorch, Hugging Face serving libraries, LangChain and a Redis vector database. Intel will demonstrate its AI systems approach at Intel Innovation, Sept. 24-25.
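To make the architecture concrete, here is a minimal sketch of how a RAG pipeline can be wired together with the components named above (LangChain, Hugging Face models and a Redis vector store). The document snippets, model choice, index name and Redis URL are illustrative placeholders, not details of the IOC deployment, and exact LangChain import paths vary by version.

```python
# Minimal RAG sketch using LangChain + Hugging Face + Redis.
# Illustrative only; names and URLs are placeholders, not the Athlete365 system.
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import Redis
from langchain.llms import HuggingFacePipeline
from langchain.chains import RetrievalQA

# 1. Embed reference documents (e.g. social media guidelines) and
#    store them in a Redis vector index.
docs = [
    "Athletes may post photos from Olympic venues to personal accounts.",
    "Commercial content, AI-generated content and recordings from "
    "restricted areas may not be posted.",
]
embeddings = HuggingFaceEmbeddings(
    model_name="sentence-transformers/all-MiniLM-L6-v2"  # illustrative model
)
vectorstore = Redis.from_texts(
    docs,
    embeddings,
    redis_url="redis://localhost:6379",  # placeholder URL
    index_name="athlete-guidelines-demo",  # hypothetical index name
)

# 2. Load an open text-generation model to produce the answers.
llm = HuggingFacePipeline.from_model_id(
    model_id="mistralai/Mistral-7B-Instruct-v0.2",  # illustrative choice
    task="text-generation",
    pipeline_kwargs={"max_new_tokens": 256},
)

# 3. Wire retrieval and generation together: relevant passages are fetched
#    from Redis and supplied to the LLM as context before it answers.
qa = RetrievalQA.from_chain_type(llm=llm, retriever=vectorstore.as_retriever())
print(qa.run("Can I post photos from Olympic venues during the games?"))
```

The point of the pattern is that the retriever pulls the relevant guideline passages out of the vector store and hands them to the language model as context, so answers stay grounded in the official documents rather than in the model’s general training data.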