Nov. 22, 2025, 4:06 PM EST

By Julie Tsirkin, Gordon Lubold, Megan Shannon and Alexandra Marquez

President Donald Trump on Saturday said that his administration’s peace proposal for Ukraine and Russia is “not my final offer,” telling reporters after a question from NBC News, “One way or the other, we have to get it ended.”

Trump added that if Ukrainian President Volodymyr Zelenskyy doesn’t agree to the peace plan, “then he can continue to fight his little heart out.”

Earlier this week, Trump said that he wants Zelenskyy — who has hesitations about the proposal — to accept the peace plan by Thanksgiving.

[Video: Trump’s new Thanksgiving deadline for Ukraine peace plan, 1:35]

Key points of the proposal include allowing Russia to keep more Ukrainian territory than it currently holds, forcing Ukraine to limit the size of its army and agreeing that Ukraine will never join NATO.

Ukrainian lawmakers have criticized the plan as conceding too much to Russia’s demands, though the Trump administration has said that the 28-point plan was drafted with input from both sides of the conflict.

“Ukraine may now face a very difficult choice, either losing its dignity or the risk of losing a key partner, either the difficult 28 points, or a very difficult winter,” Zelenskyy said in a video about the plan earlier this week.

Several U.S. lawmakers, including some in Trump’s own party, have also expressed concerns about the plan.

“While there are many good ideas in the proposed Russia-Ukraine peace plan, there are several areas that are very problematic and can be made better. The goal of any peace deal is to end the war honorably and justly — and not create new conflict,” Sen. Lindsey Graham, R-S.C., wrote in a post on X Saturday morning. Later, the South Carolina senator posted that he was confident Trump would garner a peace deal by pushing both countries and would ensure Ukraine remains free and able to defend itself from future aggression.

Sen. Roger Wicker, R-Miss., wrote in his own X post on Friday that “this so-called ‘peace plan’ has real problems, and I am highly skeptical it will achieve peace.”

He added, “Ukraine should not be forced to give up its lands to one of the world’s most flagrant war criminals in Vladimir Putin. The size and disposition of Ukraine’s armed forces is a sovereign choice for its government and people. And any assurances provided to Putin should not reward his malign behavior or undermine the security of the United States or allies.”

The proposal includes a security guarantee committing the U.S. and its European allies to treat any future attack on Ukraine as an attack on the broader trans-Atlantic community, a U.S. official told NBC News, with few additional details about what the commitment would entail.

Ukrainian leaders aren’t the only ones voicing concerns about the plan. On the sidelines of the G20 summit in South Africa, European leaders have said the proposal, if agreed to, could “leave Ukraine vulnerable to future attack.”

That was a key point in a statement signed by the leaders of Britain, France, Germany, Italy, Finland, Ireland, the Netherlands, Spain and Norway.

Secretary of State Marco Rubio and special envoy Steve Witkoff will travel to Geneva on Sunday to meet with a Ukrainian delegation to move peace talks forward with an eye to ending the war in Ukraine, according to two U.S. officials. A separate meeting with a Russian delegation in another location in coming days is under consideration, according to those officials.

Rubio and Witkoff will join Army Secretary Dan Driscoll, who arrived earlier Saturday along with the top U.S. diplomat to Ukraine, Ambassador Julie Davis. Driscoll this past week traveled to Kyiv to meet with Zelenskyy.

“Secretary Driscoll and team just landed in Geneva to work on the next steps toward achieving peace in Ukraine,” a U.S. official said.

Zelenskyy confirmed the details of the meeting in a post on X, saying he’d spoken to U.K. Prime Minister Keir Starmer on Saturday.

“Tomorrow, our advisers will work in Switzerland — representatives from Ukraine, the United States, and the E3 format, namely the UK, France, and Germany. The vast majority of European leaders are ready to assist and get involved. Consultations are ongoing at various levels, and the efforts of everyone who seeks a genuine and lasting peace matter,” Zelenskyy wrote.

Trump made quickly ending the war in Ukraine a key promise of his 2024 campaign. So far this year, he’s met with Zelenskyy multiple times and hosted Russian President Vladimir Putin for a summit in Alaska.

Russian leaders, including Putin, have praised the peace proposal, with Putin saying that if Ukraine doesn’t sign the agreement, Russia would end the war “through military means, through armed struggle.”

Julie Tsirkin is a correspondent covering Capitol Hill. Gordon Lubold is a national security reporter for NBC News. Megan Shannon is a White House researcher for NBC News. Alexandra Marquez is a politics reporter for NBC News.

admin - Latest News - November 22, 2025
admin
9 views 11 secs 0 Comments




President Donald Trump said that his administration’s peace proposal for Ukraine and Russia is “not my final offer.”



Source link

TAGS:
PREVIOUS
Grizzly bear attacks school group in Canada
NEXT
Nov. 22, 2025, 6:43 AM EST

By Yuliya Talmazan

Dozens of young people wave their phone flashlights and sing along with a teen as she belts out lyrics and plays her keyboard outside a subway station. It’s a scene that regularly plays out in cities around the world. But the singer in this widely shared video is now behind bars.

Diana Loginova, the 18-year-old student and street musician, has emerged as an unlikely — and perhaps unwilling — voice of defiance in wartime Russia.

Known by her stage name Naoko, the teen gained popularity over the summer with viral videos taken around St. Petersburg of her band Stoptime performing songs by musicians who have spoken out against Vladimir Putin’s war in Ukraine. Inevitably, in a country where nearly all forms of dissent have been crushed, Russian authorities quickly took notice.

[Photo: Diana Loginova sits near the courtroom before the start of a hearing on Oct. 16. Andrei Bok / SOPA Images/LightRocket via Getty Images]

Naoko was first detained last month for organizing a “mass simultaneous gathering of citizens” during a performance, which authorities said disrupted public order, and was sentenced to 13 days behind bars. She has since been rearrested twice on the same charges, as well as for petty hooliganism, and put back in prison. Her fellow band members have also served back-to-back sentences, although one has since been released.

“What is happening is what we call carousel arrests,” Dmitrii Anisimov, a human rights activist and spokesperson for the OVD-Info protest monitoring group, told NBC News. “Theoretically, it can continue forever,” he said. In practice, it could mean months in detention, and there is legal precedent for this, he added.

“It looks like Russian authorities want to use the persecution of Naoko, as with many other public cases, to intimidate others,” said Anisimov.

Loginova’s lawyer, Maria Zyryanova, told NBC News she wouldn’t discuss the case while the singer is behind bars. Her current sentence expires Sunday.

Naoko’s case has been extensively covered by Russian state news agencies and exiled independent media, while supporters have spread leaflets calling for her freedom.

[Photo: Aleksandr Orlov, guitarist of the street band Stoptime, in court in St. Petersburg on Nov. 11. Andrei Bok / SOPA Images/LightRocket via Getty Images]

In an interview published in August, months before her imprisonment, Naoko said she was “scared” to be detained but felt she “had to do it.”

“I understand that art is now the only language — at least in Russia — through which you can express your thoughts. I’ve chosen it and don’t want to speak any other,” she told St. Petersburg news outlet Bumaga.

Others have taken up that language in Loginova’s absence.

On a bench near the Kiyevskaya metro station in central Moscow, musician Vasily told NBC News that Naoko’s case had “lit a fire” in him, inspiring his own street performances as a way to support the jailed singer.

“Her freedom was taken away for her singing,” said Vasily, whose last name NBC News chose not to reveal for his safety. “That got me mad.”

[Photo: Street musicians perform in central St. Petersburg on Oct. 27. Olga Maltseva / AFP via Getty Images]

Valentina, a professional musician from the city of Yaroslavl, about 380 miles southeast of St. Petersburg, has been singing on both the streets and social media in support of Naoko. Inspired after seeing Naoko’s performances on TikTok, she has been posting videos where she performs the same songs. One gained more than 600,000 views on Instagram, which scared her because she did not want to get on authorities’ radar, said Valentina, who did not want her last name revealed for fear of repercussions.

“When I saw the news about Naoko, it felt like my last hope was taken away,” she said. “I did not feel sorry for myself. I just really wanted to help. I thought, ‘Why do I berate people who keep silent and don’t say anything in our country when I am also remaining silent and scared?’”

Loginova is still a child, noted Vasily — himself only 19. “That’s what’s touched people, that this little girl is not afraid to get on the streets and sing the songs of foreign agents.”

He was referencing the status of exiled singer Monetochka and rapper Noize MC, both slapped with the official designation often reserved for public figures whose views have set them at odds with the Kremlin.

It was a song by Noize MC, who has openly spoken out against the war and Putin’s regime, that Loginova performed before she first landed in jail.

[Photo: A bookshop in central St. Petersburg called Vse Svobodny, or “Everyone Is Free,” on Thursday. Olga Maltseva / AFP via Getty Images]

The rapper’s lyrics that appear to have gotten her in the most trouble seem innocuous on the surface: “I want to watch a ballet, let the swans dance.”

It’s a reference to the failed 1991 coup attempt against the last Soviet leader, Mikhail Gorbachev, during which state TV showed the “Swan Lake” ballet on a continuous loop. It has since come to symbolize something dangerous in Putin’s Russia — change.

A video of the band’s cover of the song, which Loginova has said they performed rarely and not for the cameras, drew the ire of war supporters who questioned why the band was allowed to perform the songs of “traitors” and whether their performances were, in fact, concealed protests.

A representative for Noize MC said in an email that the rapper “prefers not to give interviews or public comments regarding this case — primarily to avoid any risk of unintentionally affecting those directly involved.”

Monetochka, whose songs the band also performed, hailed them as “heroes” in a statement on social media, saying that Loginova was bringing “music and freedom” into the world rather than “violence and war.” She did not respond to NBC News’ request for comment.

NBC News has reached out to Kremlin spokesman Dmitry Peskov for comment on the case.

Kremlin critic Boris Nadezhdin, who was barred from running against Putin in last year’s election, said he had been in communication with Loginova’s mom, Irina, and was fundraising to cover the band’s legal costs.

He has also been raising awareness on social media and said people’s emotional reactions were palpable. “She is young, she is a female, and she is not at all a politician or journalist. People are used to repressions against opposition politicians and journalists, but this is a new low,” said Nadezhdin.

The people who came to listen to the band were also young, he added, a red flag for the Kremlin because of its predominantly older support base. “So they need to have an exemplary reprisal against some young singer,” he said, “so that others get fearful.”

While she garners sympathy at home and abroad, Loginova remains behind bars for her singing. Nadezhdin said he was not optimistic about her chances of performing again anytime soon.

“They won’t leave her alone quickly,” he said. “I am telling them to get ready for a long ride ahead.”

Yuliya Talmazan is a reporter for NBC News Digital, based in London.
Related Posts
October 2, 2025
Oct. 2, 2025, 6:00 AM EDT / Updated Oct. 2, 2025, 8:41 AM EDT

By Jared Perlo

Sam Altman singing in a toilet. James Bond playing Altman in high-stakes poker. Pikachu storming Normandy’s beaches. Mario jumping from his virtual world into real life.

Those are just some of the lifelike videos that are rocketing through the internet a day after OpenAI released Sora, an app at the intersection of social media and artificial intelligence-powered media generation. The app surged to become the most popular app in the iOS App Store’s Photo and Video category within a day of its release.

Powered by OpenAI’s upgraded Sora 2 media generation AI model, the app allows users to create high-definition videos from simple text prompts. After it processes one-time video and audio recordings of users’ likenesses, Sora allows users to embed lifelike “cameos” of themselves, their friends and others who give their permission. The app is a recipe made for virality. But many of the videos published within the first day of Sora’s debut have also raised alarm bells from copyright and deepfake experts.

Users have so far reported being able to feature video game characters like Lara Croft or Nintendo heavyweights like Mario, Luigi and even Princess Peach in their AI creations. One user inserted Ronald McDonald into a saucy scene from the romantic reality TV show “Love Island.” The Wall Street Journal reported Monday that the app would enable users to feature material protected by copyright unless the copyright holders opted out of having their work appear. However, the report said, blanket opt-outs did not appear to be an option, instead requiring copyright holders to submit examples of offending content.

Sora 2 builds on OpenAI’s original Sora model, which was released to the public in December. Unlike the original Sora, Sora 2 enables users to create videos with matching dialogue and sound effects.

AI models ingest large swaths of information in the “training” process as they learn how to respond to users’ queries. That data forms the basis for models’ responses to future user requests. For example, Google’s Veo 3 video generation model was trained on YouTube videos, much to the dismay of some YouTube creators. OpenAI has not clearly indicated which exact data its models draw from, but the appearance of copyrighted characters indicates that it used copyright-protected information to design the Sora 2 system. China’s ByteDance and its Seedance video generation model have also attracted recent copyright scrutiny.

OpenAI faces legal action over copyright infringement claims, including a high-profile lawsuit featuring authors including Ta-Nehisi Coates and Jodi Picoult and newspapers like The New York Times. OpenAI competitor Anthropic recently agreed to pay $1.5 billion to settle claims from authors who alleged that Anthropic illegally downloaded and used their books to train its AI models.

In an interview, Mark McKenna, a law professor and the faculty director of the UCLA Institute for Technology, Law, and Policy, drew a stark line between using copyrighted data as an input to train models and generating outputs that depict copyright-protected information.

“If OpenAI is taking an aggressive approach that says they’re going to allow outputs of your copyright-protected material unless you opt out, that strikes me as not likely to work. That’s not how copyright law works. You don’t have to opt out of somebody else’s rules,” McKenna said.

“The early indications show that training AI models on legitimately acquired copyright material can be considered fair use. There’s a very different question about the outputs of these systems,” he continued. “Outputting visual material is a harder copyright question than just the training of models.”

As McKenna sees it, that approach is a calculated risk. “The opt-out is clearly a ‘move fast and break things’ mindset,” he said. “And the aggressive response by some of the studios is ‘No, we’re not going to go along with that.’”

Disney, Warner Bros. and Sony Music Entertainment did not reply to requests for comment.

In addition to copyright issues, some observers were unsettled by one of the most popular first-day creations, which depicted OpenAI CEO Sam Altman stealing valuable computer components from Target — illustrating the ease with which Sora 2 can create content depicting real people committing crimes they never actually committed. Sora 2’s high-quality outputs arrive as some have expressed concerns about illicit or harmful creations, from worries about gory scenes and child safety to the model’s role in spreading deepfakes. OpenAI includes techniques to indicate Sora 2’s creations are AI-generated as concerns grow about the ever-blurrier line between reality and computer-generated content.

Sora 2 will include moving watermarks on all videos on the Sora app or downloaded from sora.com, while invisible metadata will indicate Sora-generated videos are created by AI systems. However, the metadata can be easily removed. OpenAI’s own documentation says the metadata approach “is not a silver bullet to address issues of provenance. It can easily be removed either accidentally or intentionally,” like when users upload images to social media websites. (A short sketch after this article shows how little it takes.)

Siwei Lyu, a professor of computer science and the director of the University at Buffalo’s Media Forensic Lab and Center for Information Integrity, agreed that multiple layers of authentication were key to proving content’s origin from Sora. “OpenAI claimed they have other responsible use measures, such as the inclusion of visible and invisible watermarks, and tracing tools for Sora-made images and audio. These complement the metadata and provide an additional layer of protection,” Lyu said.

“However, their effectiveness requires additional testing. The invisible watermark and tracing tools can only be tested internally, so it is hard to judge how well they work at this point,” he added.

OpenAI addressed those limitations in its technical safety report, writing that “we will continue to improve the provenance ecosystem to help bring more transparency to content created from our tools.” OpenAI did not immediately reply to a request for comment.

Though the Sora app is available for download, access to Sora’s services remains invitation-only as OpenAI gradually increases access.

Jared Perlo is a writer and reporter at NBC News covering AI. He is currently supported by the Tarbell Center for AI Journalism.
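The fragility OpenAI’s documentation describes is easy to demonstrate. The sketch below is a hypothetical illustration, not OpenAI’s pipeline or any platform’s actual upload code: it assumes ffmpeg is installed and uses only its documented -map_metadata option to show how a single routine re-mux, of the kind an upload pipeline might perform, silently drops container-level metadata. The helper name and file names are made up for the example.

```python
import subprocess

def strip_container_metadata(src: str, dst: str) -> None:
    """Re-mux a video while discarding container-level metadata.

    Hypothetical illustration of why metadata-only provenance tags are
    fragile: one pass through a common tool removes them, even without
    re-encoding a single frame.
    """
    subprocess.run(
        [
            "ffmpeg",
            "-i", src,              # input file
            "-map_metadata", "-1",  # drop global metadata instead of copying it
            "-c", "copy",           # copy audio/video streams untouched
            dst,
        ],
        check=True,
    )

# Usage (file names hypothetical):
# strip_container_metadata("sora_clip.mp4", "reuploaded_clip.mp4")
```

Note the asymmetry: a stream copy leaves the pixels themselves intact, so a visible watermark burned into the frames would survive this pass even as the invisible metadata vanishes. That asymmetry is one reason experts like Lyu argue for layering multiple provenance signals rather than relying on any single one.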
November 30, 2025
Trump to airlines: Venezuela’s airspace is ‘closed in its entirety’
November 18, 2025
Nov. 18, 2025, 5:00 AM EST

By Jared Perlo

Judge Victoria Kolakowski sensed something was wrong with Exhibit 6C.

Submitted by the plaintiffs in a California housing dispute, the video showed a witness whose voice was disjointed and monotone, her face fuzzy and lacking emotion. Every few seconds, the witness would twitch and repeat her expressions.

Kolakowski, who serves on California’s Alameda County Superior Court, soon realized why: The video had been produced using generative artificial intelligence. Though the video claimed to feature a real witness — who had appeared in another, authentic piece of evidence — Exhibit 6C was an AI “deepfake,” Kolakowski said.

The case, Mendones v. Cushman & Wakefield, Inc., appears to be one of the first instances in which a suspected deepfake was submitted as purportedly authentic evidence in court and detected — a sign, judges and legal experts said, of a much larger threat. Citing the plaintiffs’ use of AI-generated material masquerading as real evidence, Kolakowski dismissed the case on Sept. 9. The plaintiffs sought reconsideration of her decision, arguing the judge suspected but failed to prove that the evidence was AI-generated. Kolakowski denied their request for reconsideration on Nov. 6. The plaintiffs did not respond to a request for comment.

With the rise of powerful AI tools, AI-generated content is increasingly finding its way into courts, and some judges are worried that hyperrealistic fake evidence will soon flood their courtrooms and threaten their fact-finding mission. NBC News spoke to five judges and 10 legal experts who warned that rapid advances in generative AI — now capable of producing convincing fake videos, images, documents and audio — could erode the foundation of trust upon which courtrooms stand. Some judges are trying to raise awareness and calling for action around the issue, but the process is just beginning.

“The judiciary in general is aware that big changes are happening and want to understand AI, but I don’t think anybody has figured out the full implications,” Kolakowski told NBC News. “We’re still dealing with a technology in its infancy.”

Prior to the Mendones case, courts have repeatedly dealt with a phenomenon billed as the “Liar’s Dividend” — when plaintiffs and defendants invoke the possibility of generative AI involvement to cast doubt on actual, authentic evidence. But in the Mendones case, the court found the plaintiffs attempted the opposite: to falsely admit AI-generated video as genuine evidence.

Judge Stoney Hiljus, who serves in Minnesota’s 10th Judicial District and chairs the Minnesota Judicial Branch’s AI Response Committee, said the case brings to the fore a growing concern among judges. “I think there are a lot of judges in fear that they’re going to make a decision based on something that’s not real, something AI-generated, and it’s going to have real impacts on someone’s life,” he said.

Many judges across the country agree, even those who advocate for the use of AI in court. Judge Scott Schlegel serves on the Fifth Circuit Court of Appeal in Louisiana and is a leading advocate for judicial adoption of AI technology, but he also worries about the risks generative AI poses to the pursuit of truth. “My wife and I have been together for over 30 years, and she has my voice everywhere,” Schlegel said. “She could easily clone my voice on free or inexpensive software to create a threatening message that sounds like it’s from me and walk into any courthouse around the country with that recording.”

“The judge will sign that restraining order. They will sign every single time,” said Schlegel, referring to the hypothetical recording. “So you lose your cat, dog, guns, house, you lose everything.”

Judge Erica Yew, a member of California’s Santa Clara County Superior Court since 2001, is passionate about AI’s use in the court system and its potential to increase access to justice. Yet she also acknowledged that forged audio could easily lead to a protective order, and she advocated for more centralized tracking of such incidents.

“I am not aware of any repository where courts can report or memorialize their encounters with deepfaked evidence,” Yew told NBC News. “I think AI-generated fake or modified evidence is happening much more frequently than is reported publicly.”

Yew said she is concerned that deepfakes could corrupt other, long-trusted methods of obtaining evidence in court. With AI, “someone could easily generate a false record of title and go to the county clerk’s office,” for example, to establish ownership of a car. But the county clerk likely will not have the expertise or time to check the ownership document for authenticity, Yew said, and will instead just enter the document into the official record.

“Now a litigant can go get a copy of the document and bring it to court, and a judge will likely admit it. So now do I, as a judge, have to question a source of evidence that has traditionally been reliable?” Yew wondered.

Though fraudulent evidence has long been an issue for the courts, Yew said AI could cause an unprecedented expansion of realistic, falsified evidence. “We’re in a whole new frontier,” Yew said.

[Photo: Santa Clara County, Calif., Superior Court Judge Erica Yew. Courtesy of Erica Yew]

Schlegel and Yew are among a small group of judges leading efforts to address the emerging threat of deepfakes in court. They are joined by a consortium of the National Center for State Courts and the Thomson Reuters Institute, which has created resources for judges to address the growing deepfake quandary. The consortium labels deepfakes “unacknowledged AI evidence” to distinguish them from “acknowledged AI evidence,” like AI-generated accident reconstruction videos, which are recognized by all parties as AI-generated.

Earlier this year, the consortium published a cheat sheet to help judges deal with deepfakes. The document advises judges to ask those providing potentially AI-generated evidence to explain its origin, reveal who had access to the evidence, share whether the evidence had been altered in any way and look for corroborating evidence. In April 2024, a Washington state judge denied a defendant’s efforts to use an AI tool to clarify a video that had been submitted.

Beyond this cadre of advocates, judges around the country are starting to take note of AI’s impact on their work, according to Hiljus, the Minnesota judge.

“Judges are starting to consider, is this evidence authentic? Has it been modified? Is it just plain old fake? We’ve learned over the last several months, especially with OpenAI’s Sora coming out, that it’s not very difficult to make a really realistic video of someone doing something they never did,” Hiljus said. “I hear from judges who are really concerned about it and who think that they might be seeing AI-generated evidence but don’t know quite how to approach the issue.” Hiljus is currently surveying state judges in Minnesota to better understand how generative AI is showing up in their courtrooms.

To address the rise of deepfakes, several judges and legal experts are advocating for changes to judicial rules and guidelines on how attorneys verify their evidence. By law and in concert with the Supreme Court, Congress establishes the rules for how evidence is used in lower courts.

One proposal, crafted by Maura R. Grossman, a research professor of computer science at the University of Waterloo and a practicing lawyer, and Paul Grimm, a professor at Duke Law School and a former federal district judge, would require parties alleging that the opposition used deepfakes to thoroughly substantiate their arguments. Another proposal would transfer the duty of deepfake identification from impressionable juries to judges. The proposals were considered by the U.S. Judicial Conference’s Advisory Committee on Evidence Rules when it conferred in May, but they were not approved. Members argued “existing standards of authenticity are up to the task of regulating AI evidence.”

The U.S. Judicial Conference is a voting body of 26 federal judges, overseen by the chief justice of the Supreme Court. After a committee recommends a change to judicial rules, the conference votes on the proposal, which is then reviewed by the Supreme Court and voted upon by Congress.

Despite opting not to move the rule change forward for now, the committee was eager to keep a deepfake evidence rule “in the bullpen in case the Committee decides to move forward with an AI amendment in the future,” according to committee notes. Grimm was pessimistic about this decision given how quickly the AI ecosystem is evolving. By his accounting, it takes a minimum of three years for a new federal rule of evidence to be adopted.

The Trump administration’s AI Action Plan, released in July as the administration’s road map for American AI efforts, highlights the need to “combat synthetic media in the court system” and advocates for exploring deepfake-specific standards similar to the proposed evidence rule changes. Yet other law practitioners think a cautionary approach is wisest: waiting to see how often deepfakes are really passed off as evidence in court, and how judges react, before moving to update overarching rules of evidence.

Jonathan Mayer, the former chief science and technology adviser and chief AI officer at the U.S. Justice Department under President Joe Biden and now a professor at Princeton University, told NBC News he routinely encountered the issue of AI in the court system: “A recurring question was whether effectively addressing AI abuses would require new law, including new statutory authorities or court rules.”

“We generally concluded that existing law was sufficient,” he said. However, “the impact of AI could change — and it could change quickly — so we also thought through and prepared for possible scenarios.”

In the meantime, attorneys may become the first line of defense against deepfakes invading U.S. courtrooms.

[Photo: Louisiana Fifth Circuit Court of Appeal Judge Scott Schlegel. Courtesy of Scott Schlegel]

Schlegel pointed to Louisiana’s Act 250, passed earlier this year, as a successful and effective way to change norms about deepfakes at the state level. The act mandates that attorneys exercise “reasonable diligence” to determine whether evidence they or their clients submit has been generated by AI.

“The courts can’t do it all by themselves,” Schlegel said. “When your client walks in the door and hands you 10 photographs, you should ask them questions. Where did you get these photographs? Did you take them on your phone or a camera?”

“If it doesn’t smell right, you need to do a deeper dive before you offer that evidence into court. And if you don’t, then you’re violating your duties as an officer of the court,” he said.

Daniel Garrie, co-founder of the cybersecurity and digital forensics company Law & Forensics, said that human expertise will have to continue to supplement digital-only efforts. “No tool is perfect, and frequently additional facts become relevant,” Garrie wrote via email. “For example, it may be impossible for a person to have been at a certain location if GPS data shows them elsewhere at the time a photo was purportedly taken.”

Metadata — the invisible descriptive data attached to files that records facts like a file’s origin, date of creation and date of modification — could be a key defense against deepfakes in the near future. In the Mendones case, for example, the court found that the metadata of one purportedly real but deepfaked video showed it was captured on an iPhone 6, which was impossible given that the plaintiffs’ argument required capabilities only available on an iPhone 15 or newer. (A sketch of this kind of check follows this article.)

Courts could also mandate that video- and audio-recording hardware include robust cryptographic signatures attesting to the provenance and authenticity of their outputs, allowing courts to verify that content was recorded by actual cameras. Such technological solutions may still run into critical stumbling blocks similar to those that plagued prior legal efforts to adapt to new technologies, like DNA testing or even fingerprint analysis. Parties lacking the latest technical AI and deepfake know-how may face a disadvantage in proving evidence’s origin.

Grossman, the University of Waterloo professor, said that for now, judges need to keep their guard up.

“Anybody with a device and internet connection can take 10 or 15 seconds of your voice and have a convincing enough tape to call your bank and withdraw money. Generative AI has democratized fraud.”

“We’re really moving into a new paradigm,” Grossman said. “Instead of trust but verify, we should be saying: Don’t trust and verify.”

Jared Perlo is a writer and reporter at NBC News covering AI. He is currently supported by the Tarbell Center for AI Journalism.
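The device-model check that undid Exhibit 6C can be illustrated in a few lines. The sketch below is hypothetical and is not the court’s forensic workflow: it assumes the Pillow imaging library and reads a handful of standard EXIF fields from a still image, the same class of metadata a video examiner would pull from a container. The function and file names are invented for the example.

```python
from PIL import Image
from PIL.ExifTags import TAGS

def read_basic_exif(path: str) -> dict:
    """Return a few standard EXIF fields useful for provenance checks.

    Hypothetical illustration of the class of check used in Mendones:
    compare the recorded device against what the evidence claims.
    """
    exif = Image.open(path).getexif()
    wanted = {"Make", "Model", "DateTime", "Software"}
    # Map numeric EXIF tag IDs to names and keep only the fields we care about.
    return {TAGS.get(tag_id, tag_id): value
            for tag_id, value in exif.items()
            if TAGS.get(tag_id) in wanted}

# Usage (file name hypothetical):
# info = read_basic_exif("exhibit_frame.jpg")
# A recorded Model of "iPhone 6" alongside claimed iPhone 15-era capabilities,
# as in Mendones, would flag the exhibit for the deeper dive Schlegel describes.
```

A mismatch is grounds for follow-up questions; a clean result proves little on its own, since metadata is easily stripped or forged, which is why proposals like hardware-level cryptographic signing aim to bind provenance to the recording device itself.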
November 6, 2025
Trump strikes deal to lower cost of weight loss drugs