
Walking the Talk: How We’re Engaging in Responsible AI Practices at Zebra

By Lee McLean | December 9, 2024


This is how we're working to build trust in our systems and help protect your interests when using any Zebra technology solutions that leverage AI.

After my last post about responsible AI practices, some people rightfully inquired about Zebra's practices. So, I thought it would be best to answer these questions publicly.

The first thing I want to stress is that our underlying Code of Conduct, company policies, and departmental procedures provide the foundation for everything we do at Zebra. We don't believe AI should be treated any differently from any other technology, and it certainly shouldn't be treated as an exception to existing policies.

We always make a concerted effort to…

  • think and act customer first.
  • strive to make a positive impact.
  • lead through innovation.
  • deliver excellence with agility.

And we certainly lean on AI to live these values through organic research and development (R&D), strategic acquisitions, and our extensive partner ecosystem.

However, we have also embraced policies and practices that help ensure every Zebra AI solution we design, test, implement, use or deploy adheres to Zebra's AI principles of accountability, transparency, and ethical purpose.

Zebra works to ensure each AI model and system is supported and informed by human decision-making, and we are continuously evaluating and refining our responsible AI methodology in terms of ethics, development and deployment. Additionally, we continue to evolve AI-related processes, principles, tools and training while ensuring consistency and compliance through an internal hub-and-spoke governance model, much like we do for cybersecurity.

For example, we are in the process of developing a set of Digital Ethics Principles that will supplement Zebra's Code of Conduct and steer our AI (as well as any other technology) engagements so that we are consistently working to:

  • Avoid harm
  • Protect data and privacy
  • Mitigate unfair bias
  • Support society and human rights
  • Be transparent and trustworthy

These are complemented by our foundational Responsible AI Principles of accountability, ethical purpose, and transparency.

We are putting these principles into practice by supporting our business and technical leads at the inception of new projects, ensuring the AI-powered tools and solutions we provide to customers can be used safely within emerging legal and ethical frameworks. We also review new AI tools and examine the use case for each one, understanding the terms and conditions as well as the real-life implications of how a tool will be leveraged within Zebra or by partners and customers. This review and early intervention not only helps protect Zebra's intellectual property but also helps protect data, outputs, and the people who could be affected.

We recognize that the development and deployment of AI is a journey and that we can't wait for, nor rely solely on, government or industry-level policies to decree the necessity of ethical practices. Therefore, it is our responsibility, yours and mine, as innovators, academics, technologists, researchers, business leaders, lawyers, policy makers, and simply human beings to ask and answer the difficult questions around the transparent use of AI.

At Zebra, my colleagues and I will always embrace the hard questions and search for answers for the benefit of our end-users, customers, partners, and employees. We will also continue to contribute to the exploration of meaningful responsible AI practices by sharing our AI expertise, participating in the regulatory process, and conducting research in cooperation with customers, partners, industry associations and research institutions.

Why?

Understanding how to responsibly train, test, manage, and use AI is the key to positively augmenting people's lives. We want to make it easier for you to leverage technologies that support ethical and responsible AI principles as you put end-to-end solutions in place across your organization.

###

What to read and watch next:

How Easy Is It to Put "Responsible AI" Into Practice?

Responsible AI development, training, testing and use requires ongoing engagement. As with all advancing technologies, a focus on foundational practices will be critical to ensuring long-term success and ethical implementation.

Plan to Put AI to Work for Your Business? Just Make Sure There's Always a Human in the Loop.

Though AI can work more quickly than people can in some cases, it still takes its cues from us. That's why you always need to keep a human in the loop. But that's not the only reason why you shouldn't let an AI work completely independently. Zebra's Senior Director of AI and Advanced Technologies sits down with the Interim CEO of Humans in the Loop to talk about why human oversight will always be needed with AI systems and whether there's ever a time an AI system can be left to work autonomously.

