AI is Getting Deregulated, What Does This Mean for Creatives?

Designers don’t write much about regulation, and regulation’s impact on design doesn’t get much press. Yet regulation has a significant effect on design practitioners, one that will continue to grow over time. Why does that matter now? The incoming administration has signaled a sharp departure from the current approach to AI governance, promising to roll back existing regulations and foster a more hands-off approach to AI development.

The message is clear: a new era of deregulated AI is upon us.

This change presents opportunities and challenges for designers, particularly those in UX (User Experience) and ID (Industrial Design). Leaders at design organizations should be prepared to navigate an AI environment where innovation may speed up, but where guardrails around ethical and user-centered design could loosen.

The Status of AI Deregulation

Rapid AI advancements have stirred concerns around ethics, privacy, and potential societal impact. Recognizing these risks, the Biden administration issued a comprehensive AI executive order in 2023, aiming to enforce standards in data protection, transparency, and accountability (Le Monde, 2024).

By comparison, Trump’s team argues that these regulations hinder innovation, and his campaign has proposed repealing these protections to encourage growth and reduce friction for tech companies (Le Monde, 2024). Industry leaders like Elon Musk and Peter Thiel support this deregulatory approach, arguing that fewer restrictions on AI will allow the U.S. to maintain its competitive edge against global players like China (Financial Times, 2024).

While this may seem like a technical policy shift, the effects will ripple out across many sectors, including design. As AI’s capabilities and applications expand, understanding and adapting to the implications of deregulation becomes essential for decision-makers in creative industries.

Deregulation’s Impact on Creatives

1. Increased Speed and Accessibility of AI Tools

With fewer regulatory constraints, tech companies will likely push AI tools to market faster. For UX and ID designers, this means access to cutting-edge AI technology sooner than expected. This acceleration could open new possibilities across the design process, from automated ideation to advanced user testing and data analytics. Gyroscope invests time and resources in preparing for this scenario.

However, the speed of development could also mean fewer opportunities to test and validate these tools for ethical concerns, such as accessibility, user privacy, and algorithmic bias. Leaders will need to weigh the benefits of rapid innovation against the responsibility to protect end-users.

2. Expanded AI in Design Workflow

AI already supports aspects of UX and ID, particularly in data-driven insights and prototype creation. As regulatory controls ease, we can expect even more sophisticated AI tools entering the design landscape. Imagine a future state where AI-powered programs generate artifacts based on real-time behavioral data, predict trends, or customize interfaces for hyper-personalized experiences. AI may soon become an integral part of design workflow, offering unprecedented creative autonomy and efficiency.

But here’s the catch: with deregulation, there may be fewer standards around data use and model transparency. Will these tools respect user privacy and data integrity? As designers rely more on AI-driven insights, leaders and managers must assess their technology providers on an ongoing basis, ensuring they align with their values and commitment to user-centered design.

3. New Ethical Responsibilities for Leaders

One of the most significant risks of AI deregulation is the potential decline in ethical oversight. In a “wild west” scenario for AI, companies might roll out features that prioritize innovation over safety, inadvertently embedding harmful biases or ignoring privacy safeguards. Designers could find themselves navigating tools that, while powerful, lack transparency or carry unknown ethical risks (The Australian, 2024). One might convincingly argue that we are currently living in this scenario.

Deregulation calls for heightened vigilance for design leaders. Now more than ever, leaders must act as ethical gatekeepers, setting standards for their teams and establishing clear criteria for adopting new technologies. By developing internal AI guidelines, design organizations can hold themselves accountable, aiming to ensure that their design choices empower rather than exploit users.

Proactive Strategies for Creative Leaders

If your design organization is incorporating AI tools, here are several strategies for managing the benefits and risks.

1. Establish AI Governance Policies

Define an internal code of conduct around AI use, focusing on transparency, fairness, and privacy. These policies can guide your teams when evaluating new tools, helping them assess whether the technology aligns with your organization’s ethical and user-centered values.

2. Prioritize Inclusivity

As tools become more data-driven, it is key to avoid designing for an “average” user. AI systems often reflect the biases inherent in their training data. To counter this, encourage your teams to adopt inclusive design practices, consider diverse user groups, and validate AI-driven insights against real user feedback to ensure the technology enhances inclusivity rather than reinforces exclusion.

3. Demand Transparency from Tech Providers

Clients and users will demand transparency as they become more informed about AI. Gyroscope encourages design organizations to question how vendors handle data, mitigate bias, and make their algorithms transparent. Buyers can exert influence by opting for tech providers who commit to ethical practices, driving demand for responsible AI. The buyer votes with their wallet even in a marketplace with limited options.

4. Invest in AI Literacy for Your Team

Gyroscope brings AI literacy and competency to your business through learning by doing. With AI tools becoming more integral to design, ongoing education is key. We recommend investing in workshops and training sessions that increase your team’s understanding of AI technology, focusing on its capabilities and ethical implications. By building internal AI literacy, your team will be better equipped to leverage AI responsibly and make informed choices about its use in design processes.

The Opportunity to Lead Ethically

While deregulation presents challenges, it also allows design leaders to set an example. By adopting proactive, user-focused standards, organizations can model ethical AI adoption. Decision-makers in design organizations have an opportunity to position themselves as advocates for responsible innovation, proving that AI can be both groundbreaking and considerate of human impact.

Looking Forward

AI’s rapid advancement and the deregulation wave create a pivotal moment for creative industries. Leaders in UX and ID are responsible for approaching this change strategically, ensuring that as we embrace AI’s capabilities, we remain grounded in the values that make design impactful. AI will continue to reshape the creative field, but it’s up to organization leaders to ensure it aligns with user-centered and ethical design principles.

For those navigating this journey, expertise and guidance are essential to thread that particular needle. At Gyroscope, we remain fully committed to driving value and actionable innovation through AI. If you’d like to learn more, reach out to us here or through our website.

Sources:

1. Le Monde (2024). Artificial intelligence: Trump and Vance want less regulation, more power, to counter China.

2. Financial Times (2024). Has Silicon Valley gone MAGA?

3. The Australian (2024). Trump win sparks ‘wild west’ tech warning.
