Steg.AI puts deep learning on the job in a clever evolution of watermarking (August 1, 2023)
Watermarking an image to mark it as one’s own has value across countless domains, but these days it takes more than adding a logo in the corner. Steg.AI lets creators embed a nearly invisible watermark using deep learning, defying the usual “resize and resave” countermeasures.
Ownership of digital assets has had a complex few years, what with NFTs and AI generation shaking up what was a fairly low-intensity field before. If you really need to prove the provenance of a piece of media, there have been ways of encoding that data into images or audio, but these tend to be easily defeated by trivial changes like saving the PNG as a JPEG. More robust watermarks tend to be visible or audible, like a plainly visible pattern or code on the image.
An invisible watermark that can easily be applied, just as easily detected, and which is robust against transformation and re-encoding is something many a creator would take advantage of. IP theft, whether intentional or accidental, is rife online and the ability to say “look, I can prove I made this” — or that an AI made it — is increasingly vital.
Steg.AI has been working on a deep learning approach to this problem for years, as evidenced by this 2019 CVPR paper and the receipt of both Phase I and II SBIR government grants. Co-founders (and co-authors) Eric Wengrowski and Kristin Dana worked for years before that in academic research; Dana was Wengrowski’s PhD advisor.
Wengrowski noted that while they have made numerous advances since 2019, the paper does show the general shape of their approach.
“Imagine a generative AI company creates an image and Steg watermarks it before delivering it to the end user,” he wrote in an email to TechCrunch. “The end user might post the AI-generated image on social media. Copies of the deployed image will still contain the Steg.AI watermark, even if the image is resized, compressed, screenshotted, or has its traditional metadata deleted. Steg.AI watermarks are so robust that they can be scanned from an electronic display or printout using an iPhone camera.”
Although they understandably did not want to provide the exact details of the process, it works more or less like this: instead of having a static watermark that must be awkwardly layered over a piece of media, the company has a matched pair of machine learning models that customize the watermark to the image. The encoding algorithm identifies the best places to modify the image in such a way that people won’t perceive it, but that the decoding algorithm can pick out easily — since it uses the same process, it knows where to look.
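The key idea in that description is that the encoder and decoder share the same process, so the decoder knows where to look. As a minimal sketch of that shared-secret pairing (not Steg.AI’s actual deep-learning method, and with entirely hypothetical function names), here the shared “model” is just a seeded random generator that both sides use to derive the same embedding positions, with each payload bit hidden in the least-significant bit of a chosen pixel:

```python
import random

def pick_positions(key: int, n_pixels: int, n_bits: int) -> list[int]:
    """Encoder and decoder derive the SAME positions from the shared key."""
    rng = random.Random(key)
    return rng.sample(range(n_pixels), n_bits)

def encode(pixels: list[int], bits: str, key: int) -> list[int]:
    """Hide each payload bit in the least-significant bit of a chosen pixel,
    perturbing each touched pixel value by at most 1."""
    out = list(pixels)
    for pos, bit in zip(pick_positions(key, len(pixels), len(bits)), bits):
        out[pos] = (out[pos] & ~1) | int(bit)
    return out

def decode(pixels: list[int], n_bits: int, key: int) -> str:
    """Read the least-significant bits back from the same positions."""
    return "".join(str(pixels[pos] & 1)
                   for pos in pick_positions(key, len(pixels), n_bits))
```

Unlike this toy (which a simple re-encode would destroy), Steg.AI’s learned encoder places perturbations where they both survive compression and stay imperceptible, but the matched-pair structure is the same.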
The company described it as a bit like an invisible and largely immutable QR code, but would not say how much data can actually be embedded in a piece of media. If it really is anything like a QR code, it can have a kilobyte or three, which doesn’t sound like a lot, but is enough for a URL, hash, and other plaintext data. Multiple-page documents or frames in a video could have unique codes, multiplying this amount. But this is just my speculation.
Steg.AI provided multiple images with watermarks for me to inspect, some of which you can see embedded here. I was also provided (and asked not to share) the matching pre-watermark images; while on close inspection some perturbations were visible, if I didn’t know to look for them I likely would have missed them, or written them off as ordinary JPEG artifacts.
Here’s another, of Hokusai’s most famous work:
You can imagine how such a subtle mark might be useful for a stock photography provider, a creator posting their images on Instagram, a movie studio distributing pre-release copies of a feature, or a company looking to mark its confidential documents. And these are all use cases Steg.AI is looking at.
It wasn’t a home run from the start. Early on, after talking with potential customers, “we realized that a lot of our initial product ideas were bad,” recalled Wengrowski. But they found that robustness, a key differentiator of their approach, was definitely valuable, and since then have found traction among “companies where there is strong consumer appetite for leaked information,” such as consumer electronics brands.
“We’ve really been surprised by the breadth of customers who see deep value in our products,” he wrote. Their approach is to provide enterprise-level SaaS integrations, for instance with a digital asset management platform. That way no one has to say “watermark that” before sending it out; all media is marked and tracked as part of the normal handling process.
An image could be traced back to its source, and changes made along the way could conceivably be detected as well. Or alternatively, the app or API could provide a confidence level that the image has not been manipulated — something many an editorial photography manager would appreciate.
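One plausible way such a confidence level could be computed (an illustrative sketch, not Steg.AI’s API) is for the decoder to compare the bits it recovers against the expected payload and report the match rate; an untouched image returns its payload intact, while heavy edits corrupt some fraction of the bits:

```python
def confidence(recovered_bits: str, expected_bits: str) -> float:
    """Fraction of watermark bits that survived intact, as a rough
    proxy for how little the image has been manipulated."""
    matches = sum(r == e for r, e in zip(recovered_bits, expected_bits))
    return matches / len(expected_bits)

assert confidence("10110010", "10110010") == 1.0    # intact image
assert confidence("10110011", "10110010") == 0.875  # one corrupted bit
```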
This type of thing has the potential to become an industry standard — both because they want it and because it may in the future be required. AI companies just recently agreed to pursue research around watermarking AI content, and something like this would be a useful stopgap while a deeper method of detecting generated media is considered.
Steg.AI has gotten this far with NSF grants and angel investment totaling $1.2 million, but just announced a $5 million A round led by Paladin Capital Group, with participation from Washington Square Angels, the NYU Innovation Venture Fund, and angel investors Alexander Lavin, Eli Adler, Brian Early and Chen-Ping Yu.
Humans Need Not Apply: The AI Candidate Promising to Disrupt Democracy
June 15, 2024

The rise of AI Steve, the artificial intelligence candidate running for a seat in the UK Parliament, has sparked a heated debate about the role of AI in governance and the potential disruption it could bring to traditional democratic processes.
Steven Endacott, the human force behind AI Steve, envisions his AI co-pilot as a conduit for direct democracy, enabling constituents to engage with the AI, share concerns, and shape its policy platform through a voting system of “validators.” Endacott has pledged to vote in Parliament according to the AI’s constituent-driven platform, even if it conflicts with his personal views.
Proponents argue that AI Steve can revolutionize politics by bringing more voices into the process and ensuring that policies truly reflect the will of the people. They claim that an AI candidate can engage in up to 10,000 conversations simultaneously, allowing for unprecedented levels of public participation and input.
However, critics raise valid concerns about transparency, accountability, and the potential for AI systems to be manipulated or influenced by their creators, data limitations, or external actors. There are also questions about whether an AI can fully grasp the nuances and human elements involved in complex political issues.
Some argue that AI Steve is merely a clever marketing ploy to garner attention and votes, rather than a genuine effort to “humanize” politics. There are fears that the use of AI in elections could undermine faith in electoral outcomes and democratic processes if voters become aware of potential scams or manipulation.
Beyond the specific case of AI Steve, the rise of AI candidates and the increasing use of AI in political campaigns and elections raise broader questions about the integrity of democratic systems and the need for effective regulations and guidelines.
Anti-democratic actors and authoritarian regimes may seek to exploit AI technologies for censorship, surveillance, and suppressing dissent under the guise of enhancing governance. There are also concerns about the potential for an “AI arms race” between political parties to develop and deploy the most sophisticated AI technologies, further eroding public trust.
As AI tools become more advanced and accessible, upholding electoral integrity will require proactive efforts to establish guardrails, transparency measures, and accountability frameworks around their use in politics. Policymakers, advocates, and citizens must work together to ensure that AI is leveraged as a force for a better and more inclusive democracy, rather than a tool for manipulation or consolidation of power.
The rise of AI candidates like AI Steve serves as a wake-up call for democratic societies to grapple with the implications of artificial intelligence in governance and to strike the right balance between harnessing its potential benefits and mitigating its risks to the democratic process.
Author: Bolanle Media Staff
Saudi Arabia is reportedly considering abandoning the US dollar for oil trade settlements, a move that could shake the foundations of the global financial system. For decades, the petrodollar system has propped up the dollar’s status as the world’s reserve currency, with Saudi Arabia insisting on dollar payments for its vast oil exports.
However, recent comments from Saudi officials hint at exploring alternatives to the dollar amid growing tensions with the US over various geopolitical issues and the rise of economic powerhouses like China.
Implications of a Petrodollar Shift
If Saudi Arabia abandons the petrodollar, the implications could be significant:
1. Dollar Dominance Eroded: The dollar’s reserve currency status could weaken, potentially leading to a decline in its value.
2. Global Financial Instability: A sudden shift could trigger volatility in global markets as investors adjust portfolios.
3. Geopolitical Realignment: The move could signal Saudi alignment with China and challenge US economic hegemony.
Challenges and Uncertainties
While the prospect is significant, challenges remain:
1. Finding a suitable alternative currency with the dollar’s liquidity and stability.
2. Potential economic disruption for Saudi Arabia and trading partners.
3. Political backlash and strained relations with the US and allies.
As the world watches, it remains uncertain whether Saudi Arabia’s comments signal a negotiating tactic or a profound shift in the global financial order.
Author: Bolanle Media Staff
X, the social media platform formerly known as Twitter, has made a significant policy shift by officially permitting adult content on its platform with some restrictions and guidelines.
In an update to its rules, X stated that users can now share “consensually produced and distributed adult nudity or sexual behavior” as long as it is properly labeled and not prominently displayed in areas like profile pictures or header images.
“We recognize that many of our users are adults who want to freely express themselves by sharing legal adult content,” said an X spokesperson. “At the same time, we have a responsibility to protect minors and prevent exposure to explicit material without proper labeling.”
Under the new guidelines, users who “regularly post” adult content must adjust their settings to automatically mark images and videos as sensitive content, which blurs or hides the media by default. By default, users under 18 or who haven’t entered their birth date cannot view this sensitive adult content.
The policy prohibits content “promoting exploitation, nonconsent, objectification, sexualization or harm to minors, and obscene behaviors.” It applies to all adult content, whether photographic, animated, or AI-generated.
X has stated that it will monitor user-generated content and adjust account settings for those who fail to properly mark pornographic posts. Similar rules and enforcement will apply to violent content as well.
The move aligns X with Apple’s app store guidelines, which allow apps with adult content as long as it is hidden by default and behind proper age gates and content warnings.
While adult content was already present on X, this policy update officially permits and regulates it, aiming to balance freedom of expression for consenting adults with protecting minors from exposure to explicit material.
However, enforcing these rules consistently may prove challenging for X’s reduced content moderation teams following recent layoffs and cost-cutting measures.
The policy shift has drawn mixed reactions, with some praising X for embracing adult expression while others raise concerns about the potential for the platform to become inundated with pornographic content despite the restrictions.
As X navigates this new territory, the effectiveness of its labeling requirements, age verification measures, and content moderation efforts will be closely watched by users, regulators, and advocacy groups alike.
Author: Bolanle Media Staff