Innovating in the era of Trustworthy AI
In August 2021, BDVA released its community position on the proposal for the AI regulation, supporting the idea of balancing regulation and innovation. The position acknowledges the importance of trust and trustworthiness in boosting investment and innovation. However, the cost, time, infrastructure, and knowledge needed to, for example, comply with the AI regulation may be burdensome. There is a risk that small companies and research and education organisations may struggle to keep up and be affected negatively, e.g. by a lack of sufficiently qualified personnel, making implementation difficult and hindering innovation and competitiveness. The BDVA recommendations also support the establishment of safe environments for better testing and experimentation, because access to knowledge, testing, experimentation, and ecosystems are key factors for success.
BDVA also stressed the importance of investing in federated experimental networks such as the European Federation of i-Spaces or Big Data Innovation Hubs. These networks can not only provide SMEs and start-ups with access to knowledge, innovation, testing, and ecosystems in the context of the new AI regulation, but can also offer legislators opportunities to support collaboration between innovators and standardisation experts.
The understanding of the challenges and opportunities involved in developing a truly trustworthy AI ecosystem and boosting innovation has evolved considerably in the last few months, as this topic has been widely reflected upon and discussed in Europe and worldwide. Through this session we intend to take the pulse of the current status, gathering input from regulatory experts, research projects, and organisations supporting AI experimentation, innovation, and adoption.