Oct. 12, 2025, 1:54 AM EDT / By Phil Helsel

Pam Bondi vs. the Senate: Round Two. That was the scenario envisaged by "Saturday Night Live" on Saturday, with alum Amy Poehler portraying the attorney general in a follow-up to her combative hearing with Democrats this week.

Asked how President Donald Trump could justify deploying National Guard troops against Americans, Poehler's Bondi was confrontational. "Before I don't answer, I'd like to insult you personally," Poehler's Bondi responded.

Fellow former cast member Tina Fey made a surprise appearance as Homeland Security Secretary Kristi Noem, toting an assault-style rifle and making a pitch for applicants to become Immigration and Customs Enforcement officers that included questions like, "Do you need a job now?" and "Do you take supplements that you bought at a gas station?"

"Then buckle up and slap on some Oakleys, big boy: Welcome to ICE," Fey's Noem said.

Poehler, a seven-year "SNL" cast member who left in 2008 to go on to "Parks and Recreation" fame, hosted for the third time Saturday. Her appearance came on the 50th anniversary of "Saturday Night Live," which premiered Oct. 11, 1975.

"It's always a dream come true to be here. I remember watching the show in the '70s, sitting in my house in Burlington, Massachusetts, thinking: 'I want to be an actress someday — at least until they invent an AI actress who's funnier and willing to do full-frontal,'" Poehler said in her monologue.

She also had a message of hope for those who may feel overwhelmed. "If there's a place that feels like home, that you can go back to and laugh with your friends, consider yourself lucky — and I do," she said.

And she had the last laugh against her imagined AI doppelgänger. "And to that little AI robot watching TV right now who wants to be on this stage someday, I say to you: Beep, boop, beep, boop beep beep," Poehler said. "Which translates to: You'll never be able to write a joke, you stupid robot! And I am willing to do full-frontal, but nobody's asked me, OK?"

Another skit had a cameo by Aubrey Plaza, a former intern and guest host on "SNL" who also starred on "Parks and Recreation." In a parody of Netflix's "The Hunting Wives" — introduced as "the straight but lesbian horny Republican murder drama" — Plaza played a new girl who joined the group. After a sexually charged lesson in how to make a mimosa, Plaza revealed she had a girlfriend, prompting the other women to shout, "Lesbian!" and immediately pull their guns on her.

The reunion did not end there. A "Weekend Update" anchor trio of Seth Meyers, Fey and Poehler, who have all been behind the desk, joined current hosts Colin Jost and Michael Che for a quiz show-style battle.

Role Model was Saturday's musical guest. His performance of "Sally, When the Wine Runs Out" featured an appearance by Charli XCX.

At the end of the episode, "SNL" paid tribute to Oscar-winning actor Diane Keaton, showing a portrait. Keaton died at the age of 79, her daughter said earlier Saturday.

Sabrina Carpenter, who recently released the album "Man's Best Friend," is the host and musical guest of next week's episode.

"SNL" airs on NBC, a division of NBCUniversal, which is also the parent company of NBC News.

Phil Helsel is a reporter for NBC News.

Oct. 11, 2025, 7:00 AM EDT / By Aria Bendix

It started with an unsubstantiated warning that taking Tylenol during pregnancy could raise a child's risk of autism. But the message from President Donald Trump and Health Secretary Robert F. Kennedy Jr. seems to have quickly expanded to suggest that babies and young children should avoid the common painkiller.

"Don't give it to the baby when the baby's born," Trump said of Tylenol at a Cabinet meeting on Thursday. Kennedy jumped in to suggest that children who are circumcised have higher autism rates, "likely because they're given Tylenol."

As the administration's stance on the medication has broadened over the last few weeks, researchers say the notion that young children may develop autism as a result of taking Tylenol is particularly far-fetched.

"There's even less evidence that there's a link between Tylenol in early childhood and autism than there is that Tylenol taken during pregnancy causes autism," said David Mandell, a psychiatry professor at the University of Pennsylvania.

The bulk of scientific evidence suggests moderate Tylenol use is safe in pregnancy, and many autism researchers say data does not support a causal link to autism. When it comes to young children, the American Academy of Pediatrics says Tylenol is safe when taken correctly under the guidance of a pediatrician. The medication shouldn't be given to children younger than 12 weeks, the group says, unless a doctor recommends it, since Tylenol can mask fevers or early signs of sepsis, which require immediate medical attention.

[Photo: Packages of Tylenol and generic pain and fever relief medicine for sale on a shelf in a pharmacy in Houston on Sept. 23. Ronaldo Schemidt / AFP – Getty Images file]

Trump and Kennedy's first announcement about Tylenol and autism came on Sept. 22, when they unveiled regulatory actions to limit the medication's use in pregnancy. Though Trump warned pregnant women to "fight like hell not to take it," the actual policy changes were more subdued. The Food and Drug Administration issued a letter asking physicians to "consider minimizing the use of acetaminophen during pregnancy for routine low-grade fevers." (Acetaminophen is the active ingredient in Tylenol.)

The FDA acknowledged, however, that Tylenol is the safest over-the-counter pain reliever in pregnancy and that "a causal relationship has not been established" with autism. The agency made no mention of risks to children. Nevertheless, both Kennedy and Trump have repeated such warnings on several occasions — a significant leap from the FDA messaging.

In a post on Truth Social two weeks ago, Trump wrote that young children should not take Tylenol "for virtually any reason." Kennedy, meanwhile, doubled down on his statement about circumcision in a post on X on Friday, saying that "the observed autism correlation in circumcised boys is best explained by acetaminophen exposure."

Dr. Joshua Gordon, chair of the psychiatry department at Columbia University, said the snowballing warnings about Tylenol represent a common tactic among those looking to attribute autism to vaccines or medications.

"Robert F. Kennedy and his colleagues will start with asking one question, and when the scientific community answers that question, they'll tweak the question slightly to prolong, if you will, the debate on the topic," Gordon said.

He pointed to the way the anti-vaccine community first raised concerns about the measles, mumps and rubella vaccine in connection to autism, then pivoted to focus on a mercury-based preservative in vaccines and on the cumulative amount of vaccines administered in childhood. (Each of these concerns has been debunked.)

"No amount of scientific evidence can ever be conclusive for this community," Gordon said. "The debate is like a hydra. You cut off one head and they're just going to try to emerge with another."

The Department of Health and Human Services did not respond to a request for comment. White House spokesperson Kush Desai said that "the President is right to express his commonsense opinion that Americans should use caution with all medications and adhere to FDA guidance, including the longstanding guidance regarding appropriate use and dosage of acetaminophen in young children."

A spokesperson for Kenvue, the maker of Tylenol, said the medication is "one of the most widely studied pain relievers and fever reducers in infants and children, and numerous randomized, controlled clinical trials support the safety of acetaminophen in infants and children when used as directed." The spokesperson added that "independent, sound science clearly shows that taking acetaminophen does not cause autism."

Mandell said claims that Tylenol increases autism rates in babies and toddlers are based on low-quality studies that don't prove causation. He pointed to a small study that found younger children with autism were significantly more likely to take acetaminophen for a fever compared to children without the disorder. Mandell said the study had limitations: Parents had to recall how often they gave their children acetaminophen, and children with autism are more prone to discomfort, which may lead their parents to give acetaminophen more frequently.

One scientist in particular, immunologist William Parker, has fueled the theory that autism can be attributed to acetaminophen use in babies and young children. In his post on X, Kennedy cited a paper by Parker that says there is "overwhelming evidence" that acetaminophen triggers autism. But the paper hasn't been peer-reviewed or published in a scientific journal.

Kennedy also mentioned a Danish study from 2015 that concluded that boys who are circumcised may have a greater risk of developing autism. But the study authors said they couldn't attribute the purported effect to Tylenol.

Dr. Sian Jones-Jobst, a pediatrician and the president of Complete Children's Health, a pediatric network in Lincoln, Nebraska, said very few pediatricians administer Tylenol for circumcisions; instead, the common practice is injecting a numbing medication. She added that in other situations, Tylenol is a useful tool to reduce fever or pain.

"You shouldn't let your child suffer if they're obviously uncomfortable," Jones-Jobst said.

Aria Bendix is the breaking health reporter for NBC News Digital.
Oct. 12, 2025, 6:30 AM EDT / By Jared Perlo

OpenAI's new text-to-video app, Sora, was supposed to be a social AI playground, allowing users to create imaginative AI videos of themselves, friends and celebrities while building off of others' ideas.

The social structure of the app, which allows users to adjust the availability of their likeness in others' videos, seemed to address the most pressing questions of consent around AI-generated video when it was launched last week. But as Sora sits atop the iOS App Store with over 1 million downloads, experts worry about its potential to deluge the internet with historical misinformation and deepfakes of deceased historical figures who cannot consent to or opt out of Sora's AI models.

In less than a minute, the app can generate short videos of deceased celebrities in situations they were never in: Aretha Franklin making soy candles, Carrie Fisher trying to balance on a slackline, Nat King Cole ice skating in Havana and Marilyn Monroe teaching Vietnamese to schoolchildren, for instance.

That's a nightmare for people like Adam Streisand, an attorney who has represented several celebrity estates, including Monroe's at one point.

"The challenge with AI is not the law," Streisand said in an email, pointing out that California's courts have long protected celebrities "from AI-like reproductions of their images or voices." "The question is whether a non-AI judicial process that depends on human beings will ever be able to play an almost 5th dimensional game of whack-a-mole."

Videos on Sora range from the absurd to the delightful to the confusing. Aside from celebrities, many videos on Sora show convincing deepfakes of manipulated historical moments. For example, NBC News was able to generate realistic videos of President Dwight Eisenhower confessing to accepting millions of dollars in bribes, U.K. Prime Minister Margaret Thatcher arguing that the "so-called D-Day landings" were overblown, and President John F. Kennedy announcing that the moon landing was "not a triumph of science but a fabrication."

The ability to generate such deepfakes of nonconsenting deceased individuals has already drawn complaints from family members.

In an Instagram story posted Monday about Sora videos featuring Robin Williams, who died in 2014, Williams' daughter Zelda wrote: "If you've got any decency, just stop doing this to him and to me, to everyone even, full stop. It's dumb, it's a waste of time and energy, and believe me, it's NOT what he'd want."

Bernice King, Martin Luther King Jr.'s daughter, wrote on X: "I concur concerning my father. Please stop." King's famous "I have a dream" speech has been continuously manipulated and remixed on the app. George Carlin's daughter said in a Bluesky post that his family was "doing our best to combat" deepfakes of the late comedian.

Sora-generated videos depicting "horrific violence" involving renowned physicist Stephen Hawking have also surged in popularity this week, with many examples circulating on X.

A spokesperson for OpenAI told NBC News: "While there are strong free speech interests in depicting historical figures, we believe that public figures and their families should ultimately have control over how their likeness is used. For public figures who are recently deceased, authorized representatives or owners of their estate can request that their likeness not be used in Sora cameos."

In a blog post last Friday, OpenAI CEO Sam Altman wrote that the company would soon "give rightsholders more granular control over generation of characters," referring to wider types of content. "We are hearing from a lot of rightsholders who are very excited for this new kind of 'interactive fan fiction' and think this new kind of engagement will accrue a lot of value to them, but want the ability to specify how their characters can be used (including not at all)."

OpenAI's quickly evolving policies for Sora have led some commentators to argue that the company's "move fast and break things" approach was purposeful, showing users and intellectual-property holders the app's power and reach.

Liam Mayes, a lecturer in Rice University's program in media studies, thinks increasingly realistic deepfakes could have two key societal effects. First, "we'll find trusting people falling victim to all kinds of scams, big, powerful companies exerting coercive pressures and nefarious actors undermining democratic processes," Mayes said. At the same time, being unable to discern deepfakes from real video might reduce trust in genuine media. "We might see trust in all sorts of media establishments and institutions erode," Mayes said.

As founder and chairman of CMG Worldwide, Mark Roesler has managed the intellectual property and licensing rights for over 3,000 deceased entertainment, sports, historical and music personalities, including James Dean, Neil Armstrong and Albert Einstein. Roesler said Sora is just the latest technology to raise concerns about protecting figures' legacies.

"There is and will be abuse as there has always been with celebrities and their valuable intellectual property," he wrote in an email. "When we began representing deceased personalities in 1981, the internet was not even in existence."

"New technology and innovation help keep the legacies of many historical, iconic personalities alive, who shaped and influenced our history," Roesler added, saying that CMG will continue to represent its clients' interests within AI applications like Sora.

To differentiate between real and Sora-generated video, OpenAI implemented several tools to help users and digital platforms identify Sora-created content. Each video includes invisible signals, a visible watermark and metadata — behind-the-scenes technical information that describes the content as AI-generated.

Yet several of these layers are easily removable, said Sid Srinivasan, a computer scientist at Harvard University. "Visible watermarks and metadata will deter casual misuse through some friction, but they are easy enough to remove and won't stop more determined actors." Srinivasan said an invisible watermark and an associated detection tool would likely be the most reliable approach. "Ultimately, video-hosting platforms will likely need access to detection tools like this, and there's no clear timeline for wider access to such internal tools."

Wenting Zheng, an assistant professor of computer science at Carnegie Mellon University, echoed that view, saying: "To automatically detect AI-generated materials on social media posts, it would be beneficial for OpenAI to share their tool for tracing images, audio and videos with the platforms to assist people in identifying AI-generated content."

When asked whether OpenAI had shared these detection tools with other platforms like Meta or X, a spokesperson from OpenAI referred NBC News to a general technical report. The report does not provide such detailed information.

To better identify genuine footage, some companies are resorting to AI to detect AI outputs, according to Ben Colman, CEO and co-founder of Reality Defender, a deepfake-detection startup.

"Human beings — even those trained on the problem, as some of our competitors are — are faulty and wrong, missing the unseeable or unhearable," Colman said. At Reality Defender, "AI is used to detect AI," Colman told NBC News. AI-generated "videos may get more realistic to you and I, but AI can see and hear things that we cannot."

Similarly, McAfee's Scam Detector software "listens to a video's audio for AI fingerprints and analyzes it to determine whether the content is authentic or AI-generated," according to Steve Grobman, chief technology officer at McAfee. However, Grobman added, "new tools are making fake video and audio look more real all the time, and 1 in 5 people told us they or someone they know has already fallen victim to a deepfake scam."

The quality of deepfakes also differs among languages, as current AI tools in commonly used languages like English, Spanish or Mandarin are vastly more capable than tools in less commonly used languages. "We are regularly evolving the technology as new AI tools come out, and expanding beyond English so more languages and contexts are covered," Grobman said.

Concerns about deepfakes have made headlines before. Less than a year ago, many observers predicted that the 2024 elections would be overrun with deepfakes. This largely turned out not to be true. Until this year, however, AI-generated media, like images, audio and video, has largely been distinguishable from real content. Many commentators have found models released in 2025 to be particularly lifelike, threatening the public's ability to discern real, human-created information from AI-generated content.

Google's Veo 3 video-generation model, released in May, was called "terrifyingly accurate" and "dangerously lifelike" at the time, inspiring one reviewer to ask, "Are we doomed?"

Jared Perlo is a writer and reporter at NBC News covering AI. He is currently supported by the Tarbell Center for AI Journalism.