On 14 June 2023, the European Parliament voted in favor of its draft of the first major law governing artificial intelligence. Known as the EU AI Act, it is currently the world's clearest step toward comprehensive regulation of how artificial intelligence is used. While the United States holds hearings to figure out its next legislative move, the European Union has bounded into the proverbial burning building.
The fascinating part is that this isn’t new: an earlier EU draft was proposed in April 2021. At the time, Sam Altman was already at the helm of OpenAI, and its flagship product, ChatGPT, wouldn’t be released to the public for well over a year.
The EU AI Act essentially takes a three-pronged approach. First, high-risk applications are outright banned. “High-risk” seems to be measured by potential societal impact and, consequently, potential damage. For instance, AI facial recognition tools could be used and abused by police departments, as well as by organizations with less-than-savory intentions.
Second, AI companies will have to publish summaries of the content they use to build their databases. The requirement not only gives AI customers insight into what’s being fed into the proverbial black box, but also gives copyright holders the opportunity to object to their content being included in those databases.
The third prong is implied: government oversight. If high-risk applications are to be banned, then some body must determine what, exactly, counts as high risk. As American leaders suggested in recent hearings, an international regulatory body makes more sense than individual governments. Think CERN for particle physics, or ICANN for internet governance.
Legal and regulatory safeguards act as a bulwark against the limits of self-regulation. However, even the EU AI Act, the most aggressively pushed of them all, isn’t law yet. The AI industry will still have its blind spots well into the near future.
Content creators should pay close attention. The AI platforms you lean on may change on a dime once regulations set in, and, at this point, those regulations may begin at the local or federal level. Consider how, a few years ago, EU-specific privacy laws (the GDPR) impacted internet marketers worldwide.
If the EU AI Act becomes the standard, then creators will have the option to remove their content from AI databases – perhaps even through lawsuits. While AI companies may be able to pivot their databases quickly, their customers (that is, us) may be stuck with whatever output incorporated the now-disallowed content before the removal request.
A good reference point is the crypto industry: the valuable assets fueling a company’s rocket to the moon may be worthless by the time government regulation lands. Long-term innovation will likely come from those who recognize the guardrails before they are enforced – including content creators.
Regardless of what happens in the near future, AI is a powerful tool that takes time and skill to wield.
Immedia understands that good content comes from the symbiotic relationship between AI and people. With the Contrend platform and Immedia Content, our proprietary analytics system identifies gaps in your specific content landscape, be it topic, format, genre, tone, style, or length. We then enable and empower you to create a targeted and bespoke content strategy that opens new marketing frontiers for you. We’re ready to talk anytime!
Drop us an email at info@contrend.com.