Google, following on the heels of OpenAI, published a policy proposal in response to the Trump administration's call for a national "AI Action Plan." The tech giant endorsed weak copyright restrictions on AI training, as well as "balanced" export controls that "protect national security while enabling U.S. exports and global business operations."
"The U.S. needs to pursue an active international economic policy to advocate for American values and support AI innovation internationally," Google wrote in the document. "For too long, AI policymaking has paid disproportionate attention to the risks, often ignoring the costs that misguided regulation can have on innovation, national competitiveness, and scientific leadership, a dynamic that is beginning to shift under the new Administration."
One of Google's more controversial recommendations relates to the use of IP-protected material.
Google argues that "fair use and text-and-data mining exceptions" are "critical" to AI development and AI-related scientific innovation. Like OpenAI, the company seeks to codify the right for it and rivals to train on publicly available data, including copyrighted data, largely without restriction.
"These exceptions allow for the use of copyrighted, publicly available material for AI training without significantly impacting rightsholders," Google wrote, "and avoid often highly unpredictable, imbalanced, and lengthy negotiations with data holders during model development or scientific experimentation."
Google, which has reportedly trained a number of models on public, copyrighted data, is battling lawsuits from data owners who accuse the company of failing to notify and compensate them before doing so. U.S. courts have yet to decide whether fair use doctrine effectively shields AI developers from IP litigation.
In its AI policy proposal, Google also takes issue with certain export controls imposed under the Biden administration, which it says "may undermine economic competitiveness goals" by "imposing disproportionate burdens on U.S. cloud service providers." That contrasts with statements from Google competitors like Microsoft, which in January said that it was "confident" it could "comply fully" with the rules.
Notably, the export rules, which seek to limit the availability of advanced AI chips in disfavored countries, carve out exemptions for trusted businesses seeking large clusters of chips.
Elsewhere in its proposal, Google calls for "long-term, sustained" investments in foundational domestic R&D, pushing back against recent federal efforts to reduce spending and eliminate grant awards. The company said the government should release datasets that might be helpful for commercial AI training, and allocate funding to "early-market R&D" while ensuring computing and models are "widely available" to scientists and institutions.
Pointing to the chaotic regulatory environment created by the U.S.' patchwork of state AI laws, Google urged the government to pass federal legislation on AI, including a comprehensive privacy and security framework. Just over two months into 2025, the number of pending AI bills in the U.S. has grown to 781, according to an online tracking tool.
Google cautions the U.S. government against imposing what it perceives to be onerous obligations around AI systems, like usage liability obligations. In many cases, Google argues, the developer of a model "has little to no visibility or control" over how a model is being used and thus shouldn't bear responsibility for misuse.
Historically, Google has opposed laws like California's defeated SB 1047, which clearly laid out what would constitute precautions an AI developer should take before releasing a model and in which cases developers might be held liable for model-induced harms.
"Even in cases where a developer provides a model directly to deployers, deployers will often be best placed to understand the risks of downstream uses, implement effective risk management, and conduct post-market monitoring and logging," Google wrote.
Google in its proposal also called disclosure requirements like those being contemplated by the EU "overly broad," and said the U.S. government should oppose transparency rules that require "divulging trade secrets, allow competitors to duplicate products, or compromise national security by providing a roadmap to adversaries on how to circumvent protections or jailbreak models."
A growing number of countries and states have passed laws requiring AI developers to reveal more about how their systems work. California's AB 2013 mandates that companies developing AI systems publish a high-level summary of the datasets that they used to train their systems. In the EU, to comply with the AI Act once it comes into force, companies must provide model deployers with detailed instructions on the operation, limitations, and risks associated with the model.