
Be that!

contact@bethat.ne.com

 

Be That! Menu
  • Home
  • Travel
  • Culture
  • Lifestyle
  • Sport
  • Contact Us
  • Politics


'I'm all for it': Trump says he would sign bill to release Epstein files

admin - Latest News - November 17, 2025



President Trump told reporters he would be willing to sign a bill forcing the Department of Justice to release files on Jeffrey Epstein if it passes Congress. The president added that he did not want the focus on Epstein to “detract from the great success” of the Republican Party.




Related Posts
October 7, 2025

Oct. 7, 2025, 4:42 PM EDT
By Angela Yang

Taylor Swift’s fans are used to scouring her videos and social media posts for hidden messages about her albums. But after the release of “The Life of a Showgirl,” some Swifties have said their hunt for clues led them down a rabbit hole of speculation around whether artificial intelligence was used in a series of promotional videos for the album.

The 12 videos were part of a promotional scavenger hunt released by Google, which sent fans on a search for 12 orange doors hidden across 12 cities (for her 12th album) around the world. On each of the doors was a QR code revealing video clues to the puzzle, which pieced together a phrase that fans needed to search through Google. The hunt ultimately led to the lyric video for the album’s opening track, “The Fate of Ophelia.”

While deciphering the video clues, some fans online said they noticed wonky text, muddled details and objects that disappear or shape-shift against the laws of physics. Using the hashtag #SwiftiesAgainstAI on X, they began accusing the videos of using generative AI.

Swift has not personally promoted the orange door campaign, and it’s unclear how involved she was in the production of the clips, which were also briefly posted to her YouTube account as Shorts. On Swift’s YouTube channel, the Shorts now appear to be unavailable. Swift’s recently released music video for “The Fate of Ophelia” and the 12 lyric videos for the “Showgirl” album are all still up on her channel. None use AI.

A representative for Swift did not provide comment for this story. Google did not respond to a request for comment.

Swift, a victim of AI deepfakes, has long expressed her support for artists’ rights to own their work, which some of her fans online said is what made them so disappointed when they saw the promotional videos, as AI systems are often trained on datasets containing copyrighted work. Some fans pointed out that Swift appeared to use hand-painted props on the set of the music video for “The Fate of Ophelia,” noting that the music icon has long been very thoughtful about the presentation of her work.

“When so much effort has been put into the rest of the album rollout … I think it is very, very lazy and disappointing to use generative AI to create videos a human being very much could have done,” Rachel Lord, a self-described fan of Swift, said in a TikTok video. “I think it’s very important that we as fans talk about how much we disagree with this, because if we don’t talk about it, they’re just going to continue with it,” she said, emphasizing that she’s not “hating on Taylor.”

The controversy arose amid some mixed reviews for Swift’s latest album, which topped Spotify charts and sold 2.7 million copies in its first day of release. While many have praised the upbeat bops on “The Life of a Showgirl,” others have said the tracks lack the kind of lyricism they have come to expect of Swift.

Swift’s diehard fan base has traditionally come to her defense amid any backlash. The AI speculation and the subsequent criticism, however, appeared to come just as much from her fans as her detractors. In a Reddit post about the orange door promo clips in the popular r/TaylorSwift community, a moderator wrote: “The videos are most likely AI generated. We typically do not allow AI content, but given that this is somehow related to the album push we are clearly going to keep this thread going.”

Several are calling on Swift, who has not commented on the AI speculation, to make a statement about the matter.

“Dear @taylornation13, We expected a decent album promo but we noticed that the promotion on cities were made by A.I,” wrote one X user who describes themself as a “taywarrior” and Swiftie in their bio. The post had been viewed more than 1.3 million times as of Tuesday afternoon. “A.I has a large impact on the environment and wildlife because of its LARGE water consumption and more,” the user added. “Please learn better next time. #SwiftiesAgainstAI.”

The use of AI in media production has been a polarizing subject in the entertainment industry. As generative AI tools become increasingly integrated into film, TV and music production, artists have railed against the technology due to concerns over labor displacement as well as AI companies’ scraping of human-made work without consent or compensation. Outside of vocal pushback from artists and studios, AI image, video and music generators have been hit with numerous copyright infringement lawsuits from authors, artists, news outlets, mass media companies and music labels.

Some of Swift’s defenders have argued that the seemingly AI-generated quirks in the videos might be explained by computer-generated imagery. Others have insisted that CGI would not cause objects to morph, blur or disappear when the camera moves.

Swift has not condemned the use of AI as a whole, but she has previously condemned its misuse. In a 2024 Instagram post endorsing Kamala Harris for president, Swift addressed President Donald Trump’s attempt to tout an AI image of her. “Recently I was made aware that AI of ‘me’ falsely endorsing Donald Trump’s presidential run was posted to his site,” Swift wrote. “It really conjured up my fears around AI, and the dangers of spreading misinformation.”

Angela Yang is a culture and trends reporter for NBC News.
September 22, 2025
Erdogan’s political fate may be determined by Turkey’s Kurds
October 29, 2025
Hurricane Melissa slams into Jamaica as catastrophic Category 5 storm
October 12, 2025
Oct. 12, 2025, 6:30 AM EDT
By Jared Perlo

OpenAI’s new text-to-video app, Sora, was supposed to be a social AI playground, allowing users to create imaginative AI videos of themselves, friends and celebrities while building off of others’ ideas.

The social structure of the app, which allows users to adjust the availability of their likeness in others’ videos, seemed to address the most pressing questions of consent around AI-generated video when it launched last week. But as Sora sits atop the iOS App Store with over 1 million downloads, experts worry about its potential to deluge the internet with historical misinformation and deepfakes of deceased historical figures who cannot consent to or opt out of Sora’s AI models.

In less than a minute, the app can generate short videos of deceased celebrities in situations they were never in: Aretha Franklin making soy candles, Carrie Fisher trying to balance on a slackline, Nat King Cole ice skating in Havana and Marilyn Monroe teaching Vietnamese to schoolchildren, for instance.

That’s a nightmare for people like Adam Streisand, an attorney who has represented several celebrity estates, including Monroe’s at one point. “The challenge with AI is not the law,” Streisand said in an email, pointing out that California’s courts have long protected celebrities “from AI-like reproductions of their images or voices.” “The question is whether a non-AI judicial process that depends on human beings will ever be able to play an almost 5th dimensional game of whack-a-mole.”

Videos on Sora range from the absurd to the delightful to the confusing. Aside from celebrities, many videos on Sora show convincing deepfakes of manipulated historical moments. For example, NBC News was able to generate realistic videos of President Dwight Eisenhower confessing to accepting millions of dollars in bribes, U.K. Prime Minister Margaret Thatcher arguing that the “so-called D-Day landings” were overblown, and President John F. Kennedy announcing that the moon landing was “not a triumph of science but a fabrication.”

The ability to generate such deepfakes of nonconsenting deceased individuals has already drawn complaints from family members. In an Instagram story posted Monday about Sora videos featuring Robin Williams, who died in 2014, Williams’ daughter Zelda wrote: “If you’ve got any decency, just stop doing this to him and to me, to everyone even, full stop. It’s dumb, it’s a waste of time and energy, and believe me, it’s NOT what he’d want.”

Bernice King, Martin Luther King Jr.’s daughter, wrote on X: “I concur concerning my father. Please stop.” King’s famous “I have a dream” speech has been continuously manipulated and remixed on the app. George Carlin’s daughter said in a Bluesky post that his family was “doing our best to combat” deepfakes of the late comedian. Sora-generated videos depicting “horrific violence” involving renowned physicist Stephen Hawking have also surged in popularity this week, with many examples circulating on X.

A spokesperson for OpenAI told NBC News: “While there are strong free speech interests in depicting historical figures, we believe that public figures and their families should ultimately have control over how their likeness is used. For public figures who are recently deceased, authorized representatives or owners of their estate can request that their likeness not be used in Sora cameos.”

In a blog post from last Friday, OpenAI CEO Sam Altman wrote that the company would soon “give rightsholders more granular control over generation of characters,” referring to wider types of content. “We are hearing from a lot of rightsholders who are very excited for this new kind of ‘interactive fan fiction’ and think this new kind of engagement will accrue a lot of value to them, but want the ability to specify how their characters can be used (including not at all).”

OpenAI’s quickly evolving policies for Sora have led some commentators to argue that the company’s “move fast and break things” approach was purposeful, showing users and intellectual-property holders the app’s power and reach.

Liam Mayes, a lecturer in Rice University’s program in media studies, thinks increasingly realistic deepfakes could have two key societal effects. First, “we’ll find trusting people falling victim to all kinds of scams, big, powerful companies exerting coercive pressures and nefarious actors undermining democratic processes,” Mayes said. At the same time, being unable to discern deepfakes from real video might reduce trust in genuine media. “We might see trust in all sorts of media establishments and institutions erode,” Mayes said.

As founder and chairman of CMG Worldwide, Mark Roesler has managed the intellectual property and licensing rights of over 3,000 deceased entertainment, sports, historical and music personalities, including James Dean, Neil Armstrong and Albert Einstein. Roesler said Sora is just the latest technology to raise concerns about protecting figures’ legacies. “There is and will be abuse as there has always been with celebrities and their valuable intellectual property,” he wrote in an email. “When we began representing deceased personalities in 1981, the internet was not even in existence.” “New technology and innovation help keep the legacies of many historical, iconic personalities alive, who shaped and influenced our history,” Roesler added, saying that CMG will continue to represent its clients’ interests within AI applications like Sora.

To help users and digital platforms differentiate between real and Sora-generated video, OpenAI implemented several identification tools. Each video includes invisible signals, a visible watermark and metadata, the behind-the-scenes technical information that describes the content as AI-generated.

Yet several of these layers are easily removable, said Sid Srinivasan, a computer scientist at Harvard University. “Visible watermarks and metadata will deter casual misuse through some friction, but they are easy enough to remove and won’t stop more determined actors.” Srinivasan said an invisible watermark and an associated detection tool would likely be the most reliable approach. “Ultimately, video-hosting platforms will likely need access to detection tools like this, and there’s no clear timeline for wider access to such internal tools.”

Wenting Zheng, an assistant professor of computer science at Carnegie Mellon University, echoed that view: “To automatically detect AI-generated materials on social media posts, it would be beneficial for OpenAI to share their tool for tracing images, audio and videos with the platforms to assist people in identifying AI-generated content.”

When asked whether OpenAI had shared these detection tools with other platforms like Meta or X, a spokesperson from OpenAI referred NBC News to a general technical report. The report does not provide that information.

To better identify genuine footage, some companies are turning to AI itself to detect AI outputs, according to Ben Colman, CEO and co-founder of Reality Defender, a deepfake-detecting startup. “Human beings — even those trained on the problem, as some of our competitors are — are faulty and wrong, missing the unseeable or unhearable,” Colman said. At Reality Defender, “AI is used to detect AI,” Colman told NBC News. AI-generated “videos may get more realistic to you and I, but AI can see and hear things that we cannot.”

Similarly, McAfee’s Scam Detector software “listens to a video’s audio for AI fingerprints and analyzes it to determine whether the content is authentic or AI-generated,” according to Steve Grobman, chief technology officer at McAfee. However, Grobman added, “new tools are making fake video and audio look more real all the time, and 1 in 5 people told us they or someone they know has already fallen victim to a deepfake scam.”

The quality of deepfakes also differs among languages: current AI tools in commonly used languages like English, Spanish or Mandarin are vastly more capable than tools in less commonly used languages. “We are regularly evolving the technology as new AI tools come out, and expanding beyond English so more languages and contexts are covered,” Grobman said.

Concerns about deepfakes have made headlines before. Less than a year ago, many observers predicted that the 2024 elections would be overrun with deepfakes; that largely turned out not to be true. Until this year, however, AI-generated media, like images, audio and video, has largely been distinguishable from real content. Many commentators have found models released in 2025 to be particularly lifelike, threatening the public’s ability to discern real, human-created information from AI-generated content. Google’s Veo 3 video-generation model, released in May, was called “terrifyingly accurate” and “dangerously lifelike” at the time, inspiring one reviewer to ask, “Are we doomed?”

Jared Perlo is a writer and reporter at NBC News covering AI. He is currently supported by the Tarbell Center for AI Journalism.
© 2025 Be That! All Rights Reserved