Mark Zuckerberg – The Denver Post

In Colorado bill debate over social media and porn sites, when does protection of kids become overreach?

Sat, 19 Apr 2025

A trio of bills aimed at regulating the internet to protect children in Colorado have run into a wall of opposition — along with “concerns” from the governor — over worries they’d infringe on First Amendment rights.

The sponsors of Senate Bill 201, which would have required age verification to access online pornographic materials, killed their bill on the floor this week in an acknowledgement that Gov. Jared Polis would likely veto the measure. Two other bills aimed at adding social media regulations, in part to protect underage youth from criminal activity, are also in danger of being vetoed.

Taken together, the hurdles facing the three bills show the push and pull of balancing the protection of the state’s youth with concerns about impeding the free flow of information and violating rights guaranteed by the U.S. Constitution.

One of the social media measures would require social media companies like Meta and X to implement age-verification systems and parental controls for underage users. It is stalled in the House as backers try to address concerns that it would give the government too much influence over the platforms.

The other proposal, Senate Bill 86, has passed the legislature. Its backers organized a news conference Monday to urge Polis to sign it. Opponents of the bill — including the American Civil Liberties Union of Colorado and ProgressNow Colorado, a liberal advocacy group — are planning a news conference Tuesday to urge Polis to veto it. (The NAACP Denver branch’s public policy director was included on a news advisory about the event, but the state conference said Saturday, after this story was published, that the group doesn’t have a position on the bill and that the branch’s involvement didn’t have proper authorization.)

That bipartisan bill would require stricter enforcement of terms of service on the platforms, require them to publish reports about how minors use the platforms and require stricter cooperation with law enforcement. At the event urging Polis to sign it into law, supporters warned of kids being “sextorted” by online predators or getting access to illegal guns and drugs through social media.

They brought out families whose children died from tainted drugs they bought on social media.

“(This bill) simply says that for users egregiously harming our kids, they cannot be given endless chances — chance after chance, time after time — to continue victimizing others,” Sen. Lisa Frizell, a Castle Rock Republican who is sponsoring the bill, said. “If the kind of conduct that we see on these social media platforms were happening on the street, there would be no question about intervention. None.”

Sen. Lindsey Daugherty, an Arvada Democrat and sponsor, called the platforms’ enforcement of their own terms of service “grossly inadequate.”

Will Polis sign measure?

Polis has until Thursday to sign the measure or let it become law. In a statement, a spokesperson for Polis noted the twin goals of “protecting internet freedom and making all Coloradans safer” that are in tension.

“(Polis) shares the sponsors’ goal of protecting children from harmful materials online by giving parents more tools and information to ensure that children only access appropriate content,” Shelby Wieman wrote in the statement. “He wants to ensure that this is done in a way that respects our Constitution and case law and ensures the privacy of all people online. The governor has been upfront with the legislature about his concerns on these bills.”

While Polis hasn’t acted on the bill yet, backers say they’re confident they could muster the two-thirds majorities necessary for a rare override of a veto. It passed each chamber with support above that threshold.

That is a tactic available only for SB-86, however — since it passed early enough in the legislative session, and with enough support, for an override to be a viable route.

SB-201, the bill with the age-verification requirement to access online sexual materials, would have passed too late in the legislative session to make a veto override feasible. Colorado law gives the governor time to act on a bill if it arrives on his desk before the 110th day of the 120-day legislative session; if a bill passes both chambers later than that, lawmakers would need to call a special session for an override vote.

Colorado Sen. Paul Lundeen, the Senate minority leader, listens as the Colorado General Assembly starts its 2024 session at the Colorado State Capitol in Denver on Jan. 10, 2024. (Photo by RJ Sangosti/The Denver Post)

That calendar math led Daugherty and Sen. Paul Lundeen, a Monument Republican sponsoring it with her, to kill that bill on their own and leave the legislature more time for other matters. But they said they planned to return to the issue next year.

Both lawmakers saw that bill as an effort to bring the current state law banning minors from accessing physical copies of pornography into the 21st century.

“It’s already illegal,” Daugherty said. “We’re just trying to create an enforcement mechanism.”

Research shows unfettered access to pornography can shape developing brains and young people’s perceptions of relationships, healthy sexual conduct and more, she said, underscoring the need to restrict adult materials to adults.

But countermessaging to the proposal “went off the rails,” Daugherty said. She and Lundeen hope temperatures will cool before the conversation resumes next year. They also expect additional guidance from an expected U.S. Supreme Court ruling in Free Speech Coalition v. Paxton, in which the coalition, a trade group for the adult entertainment industry, is challenging a Texas law that requires age verification to access pornographic websites.

“If you’re going to do it, do it right”

The Free Speech Coalition from that case also opposes the Colorado bill. Mike Stabile, its director of public policy, said the idea sounds logical on its face, “but in practicality, you’re preventing the vast majority of adults from accessing adult content.”

Such a restriction pushes adults who want to preserve their anonymity — or just avoid the hassle of verification — away from legitimate adult websites and toward those that would skirt the law, he said. He also raised First Amendment concerns: The amendment doesn’t just protect free speech, he said, but also the ability of others to access it.

“We’re not opposed to age verification,” Stabile said. “… But if you’re going to do it, do it right. Do it in a way that’s effective.”

He also warned that the bill would amount to backdoor censorship of things like homosexuality — an issue that Daugherty and Lundeen say they hope to address with a related bill. During debate on SB-201, they found the state definition of sexually explicit materials deemed harmful to children still includes “homosexuality.” The lawmakers plan to run a separate bill this year to remove it.

For ProgressNow Colorado, one of the chief opponents of the measures, the overriding concern remains the First Amendment, said its political director, Hazel Gibson.

The bill awaiting Polis’ decision, SB-86, would require social media companies to more rigorously police their platforms. Or, as she describes it: “That’s the government telling a private business they have to remove people.”

She also warned that the bill’s requirements for stricter cooperation with authorities would blur the line between private companies and law enforcement. She pointed to the CEO of Facebook and Instagram’s parent company, Meta.

“I don’t know about you,” Gibson said, “but I don’t want Mark Zuckerberg having that kind of role in my life.”

Letters: Trump isn’t running an “oligarchy.” It’s a “kleptocracy.”

Fri, 04 Apr 2025

Tracking Trump’s power trip

Re: “It’s about ideology, not oligarchy,” March 30 commentary

Ross Douthat states that Sen. Bernie Sanders’ misuse of “oligarchy” paints Trumpism as a movement of billionaires calling the shots. He correctly notes that many Trump agenda items are not those of the oligarchs.

What he ignores is that any coherent “ideology” of Donald Trump is contained in Project 2025, which is his model for assuming power.

Project 2025 is explicitly derived from the processes used by Hungarian Prime Minister Viktor Orbán to subvert his country’s democracy and by Russian leader Vladimir Putin to subvert the inchoate democratic movement after the fall of the Soviet Union. The oligarchs in Hungary and Russia support the dictatorship with their monetary gains in return for being allowed to remain billionaires. The dictator calls the shots.

According to former Secretary of Labor Robert Reich, seven oligarchs contributed a total of $1 billion to elect Trump and other Republicans. Elon Musk contributed millions to the key Wisconsin Supreme Court election. Musk, Jeff Bezos, Rupert Murdoch and Mark Zuckerberg support Trump on their mega-platforms.

Again, according to Reich, Trump supporters add to his riches. Douthat is right, except that the “ideology” of Trumpism should be called “kleptocracy.”

David Schroeder, Arvada

America is worth staying and fighting for

Re: ” ‘I don’t get why anyone would want to stay,’ ” March 9 news story

First, there was the article “I don’t understand why anyone would want to stay” about moving out of the U.S. because of the country’s political direction.

Then, there were the follow-up letters concerning the moves abroad – other countries are also having their problems, etc.

What amazes and distresses me is that I have seen no letters stating my instant, gut reaction when I read the “don’t understand why anyone would want to stay” article:

Because America is worth fighting for – for our future and the future country in which our grandchildren will live.

America has been a beacon of light to the world. That light is dimming. We need to stay and get that light shining again.

I never saw this country as a nation of quitters.

Alvina Mabry, Golden

The playbook to promote division

In what may be the lowest of lows in spreading inflammatory disinformation, Rep. Lauren Boebert, CD-4, issued a statement on April 1 in which she asserted that “Members of the Democrat [sic] Party have made calls for their supporters to incite and engage in domestic terrorism by attacking Tesla vehicles and facilities to protest Elon Musk.” This is utterly defamatory and beyond the bounds of civil political discourse, but it fits with the playbook of the Trump administration and Rep. Boebert’s efforts to create a false narrative promoting division and hostility within our society for their own political ends.

Ralph Roberts, Littleton

Journalist rightfully honored

I wish to offer my sincere congratulations to Denver Post reporter Sam Tabachnik on being named Journalist of the Year by the Colorado Society of Professional Journalists. He is an excellent researcher and writer. You must be very proud to have him on your staff. I anticipate reading more of his important articles in The Denver Post.

Victoria Swearingen, Denver

To send a letter to the editor about this article, submit online or check out our guidelines for how to submit by email or mail.

Data centers are a hot real estate trend, but will Colorado miss out on big projects?

Fri, 11 Oct 2024

A hot trend in metro Denver’s real estate market is data centers, with brokers selling sites for the facilities and developers saying demand is outpacing supply.

Flexential, a Denver-based developer and operator of data centers, recently broke ground on a 22.5-megawatt center on 17 acres in Parker. The facility will be its largest in the metro area, adding to the company’s 42 centers across 19 markets in the U.S.

“In the last 24 months, there has been a significant increase in demand for data centers,” Flexential CEO Chris Downie said.

Jason White, a managing director with the Denver office of the JLL real estate firm, said various transactions are in the works, and there are many inquiries about sites of more than 50 acres for new data centers. Companies are talking to area electric utilities about their ability to serve the power-intensive facilities.

“I’m a party to six different transactions myself and obviously other brokers are working other deals around the metro area,” White said. “I think we’re going to start hearing of more and more transactions for large-scale data centers.”

White said there’s potential for a significant increase in the number of data centers in the Denver and Colorado Springs areas because of the growth in the number of technology and artificial intelligence customers in the region.

Industry listings show about 40 centers in Colorado, stretching from Fort Collins to Colorado Springs. Most of them are in metro Denver.

Data centers house computers that keep the internet running. The market for colocation centers, where companies rent space for their computing equipment, has doubled in size in the last four years and vacancy in the centers is at a record low of 3%, according to JLL.

Large-scale data users such as Amazon, Meta and Microsoft are building campuses where several buildings that contain computer centers are clustered. Downie of Flexential said companies are looking at places like Colorado because the markets in Virginia and Silicon Valley have become saturated.

“Denver has been considered a secondary market, but it’s coming into its own as a great destination for this new demand,” Downie said.

While the Denver area has several of the attributes that make it a good market, the state overall is lacking a crucial element that could make it more attractive to “hyper-scale” projects, said Graham Williams, chief investment officer at Tract, a Denver-based data center developer. That element is some kind of state sales tax exemption for the facilities.

“Colorado is one of only 10 states that doesn’t have a sales tax exemption program of some sort for large-scale data centers. Data centers have intentionally moved their large-scale deployments to other states,” Williams said.

In the West, those states include Utah, Wyoming, Nevada and Arizona. Microsoft has a large campus in Cheyenne and Meta, owned by Facebook founder Mark Zuckerberg, is building a campus there, Williams said.

The construction and permanent jobs and property taxes associated with those large data centers are going to other states, he said.

“In our view, until Colorado changes its tax policy, it’s unlikely to take hold here in the state,” Williams said. “We’d love to be doing projects here, but that is the reason we’re focused on other states.”

Tract focuses on large-scale data centers. Williams said the company is in various stages of planning and developing facilities on just over 24,000 acres in 10 states.

The Colorado General Assembly rejected a bill in this year’s session that would have offered state sales and use tax rebates for construction materials and equipment for data centers starting in 2026. Sponsors said investing in projects such as data centers is crucial for the economy and that Colorado, considered a growing high-tech hub, is falling behind in attracting the facilities.

However, opponents countered that rebates incentivizing the facilities would lead to a drain on the region’s electric system and water supplies. Water is used to cool the computers, although some companies are starting to use air for cooling.

State officials in Texas, known for being business-friendly, are questioning the impacts of the state’s booming data-center industry, according to a report this week. Texas Lt. Gov. Dan Patrick said on the social media platform X that while the state wants data centers, “it can’t be the Wild Wild West of data centers and crypto miners crashing our grid and turning the lights off.”

Tech bosses preach patience as they spend and spend on AI

Sat, 10 Aug 2024

SEATTLE — Mark Zuckerberg, Meta’s CEO, started 2023 by declaring it the “year of efficiency.” Like several of its big tech peers, Meta cut jobs and mothballed expansion plans.

Then came AI.

Zuckerberg started this year saying his company would spend more than $30 billion on new tech infrastructure in 2024. In April, he raised that to $35 billion. On Wednesday, he increased it to at least $37 billion. And he said Meta would spend even more next year.

Zuckerberg said he’d rather build too fast “rather than too late” and risk overbuilding than allow his competitors to get a big lead in the AI race.

The tech industry’s biggest companies have made it clear over the past week that they have no intention of throttling their stunning levels of spending on artificial intelligence, even though investors are getting worried that a big payoff is further down the line than once thought.

In the past quarter alone, Apple, Amazon, Meta, Microsoft and Google’s parent company Alphabet spent a combined $59 billion on capital expenses, 63% more than a year earlier and 161% more than four years ago. A large part of that was funneled into building data centers and packing them with new computer systems to build artificial intelligence. Only Apple has not dramatically increased spending because it does not build the most advanced AI systems itself.

If investors are getting anxious, they’re going to have to learn to cope with their nerves. Last week, Alphabet’s share price dropped more than 5% after it reported a 91% increase in capital expenses. But Sundar Pichai, Alphabet’s CEO, made the case for patience.

“These things take time,” he said, and “the risk of underinvesting is dramatically greater than the risk of over-investing.”

The leaders of the biggest tech companies see a once-in-a-generation opportunity in the generative AI technology behind popular chatbots like ChatGPT. They believe it can revolutionize everything from the software that runs the complex operations of global companies to research on new drugs.

When ChatGPT debuted in late 2022, tech giants were beginning to dial back a burst of spending from the pandemic. But the industry’s brief embrace of austerity went out the window when they saw the potential of artificial intelligence.

This new wave of AI is wildly expensive. The systems work with vast amounts of data and require sophisticated computer chips and new data centers to develop the technology and serve it to customers. The companies are seeing some sales from their AI work, but it is barely moving the needle financially.

In recent months, several high-profile tech industry watchers, including Goldman Sachs’ head of equity research and a partner at venture firm Sequoia Capital, have questioned when or whether AI will ever produce enough benefit to bring in the sales needed to cover its staggering costs. It is not clear that AI will come close to having the same impact as the internet or mobile phones, Goldman’s Jim Covello wrote in a June report.

“What $1 trillion problem will AI solve?” he wrote. “Replacing low-wage jobs with tremendously costly technology is basically the polar opposite of the prior technology transitions I’ve witnessed in my 30 years of closely following the tech industry.”

Google’s parent company was the first big tech outfit to report its earnings for April through June, and it added weight to those overspending concerns. Though Alphabet had a 29% jump in profit, sales for ads on YouTube, which Google owns, were lower than expected and the massive jump in infrastructure spending — Google spent an average of $145 million each day in the quarter — rattled investors.

Microsoft was up next. As OpenAI’s largest investor, it had a jump on its peers and had been raising its capital spending every quarter since the start of last year. On Tuesday, Microsoft had a bit of unwelcome news: Its cloud computing business, where most of that AI work was being done, didn’t grow as fast as expected.

But instead of serving as a moment of caution, the miss (about 1 percentage point below what was expected) added fuel to the building frenzy. Executives said that Microsoft had more demand for AI than it could serve from its data centers, a problem that they expect will persist through the end of the year. That helped explain why they are building so furiously.

Satya Nadella, Microsoft’s CEO, said much of the capital spending went to acquire physical land and buildings, which they need to lock up in advance. But the remaining 60% was for what he called “the kit,” the expensive chips and other components of a computer network.

Microsoft’s executives also asked for patience, saying the spending would bring in revenue over a long time — “over 15 years and beyond,” said Amy Hood, the company’s financial chief. Digging into Microsoft’s numbers indicates that the company is on track for more than $5 billion in sales for generative AI products this year. But for a tech giant that just reported $245 billion in annual revenue, that’s still tiny.

The next day, Meta increased its predictions for how much it would spend. Zuckerberg said he was planning for the next generation of AI systems, and the next major update to the company’s main AI model will demand 10 times more computing power.

Meta gives away the advanced AI systems it develops, but Zuckerberg still said it was worth it. “Part of what’s important about AI is that it can be used to improve all of our products in almost every way,” he said.

Amazon told investors it had spent more than $30 billion on capital expenses in the first half of the year, and that it too would spend more in the rest of the year. Executives said they need to balance building enough to meet demand without getting ahead of what they really need.

“The reality right now is that while we’re investing a significant amount in the AI space and in infrastructure, we would like to have more capacity than we already have today,” said Andy Jassy, Amazon’s CEO. “I mean, we have a lot of demand right now.”

That means buying land, building data centers and all the computers, chips and gear that go into them. Amazon executives put a positive spin on all that spending. “We use that to drive revenue and free cash flow for the next decade and beyond,” said Brian Olsavsky, the company’s finance chief.

There are plenty of signs the boom will persist. In mid-July, Taiwan Semiconductor Manufacturing Co., which makes most of the in-demand chips designed by Nvidia that are used in AI systems, said those chips would be in scarce supply until the end of 2025.

Zuckerberg said AI’s potential is “super exciting.”

“It’s why there are all the jokes about how all the tech CEOs get on these earnings calls,” he said, “and just talk about AI the whole time.”

This article originally appeared in The New York Times.

We are entering a second Gilded Age. That’s not good.

Wed, 24 Jul 2024

Where do we go from here?

We live in a time of great change in all aspects of life and often have to hit the reset button before moving on. Look how quickly artificial intelligence has entered our lives, demanding our attention.

Polls are a good way to measure public sentiment about what worries people today. I found these poll results partially answered that question. 

One recent poll, summarized in an Axios graphic, reflects the views of Colorado adults who said select issues are extreme or very serious problems. It was based on a survey of 1,202 Colorado adults conducted from May 20 to June 24, comes from the Colorado Health Foundation and was published on July 18.

Housing costs were the top concern of poll participants at 89%, followed by:   

• Rising cost of living, 86%

• Homelessness, 79%

• Health care costs, 68%

• Drug overdoses, 65%

• Crime, 59%

• Mental health, 59%

• Jobs and economy, 57%

• Illegal immigration, 53%

These results indicate that we may be approaching the nation’s Second Gilded Age. The original term was coined by Mark Twain and Charles Dudley Warner in their satirical novel, “The Gilded Age: A Tale of Today,” published in 1873.

The book refers to the period in United States history between the 1870s and the 1890s, when extreme wealth accumulation, conspicuous consumption, rising inequality, political partisanship and social turmoil (focused on populism, racism and xenophobia) framed daily life. Does this sound like America today?

Moreover, the paths the country took when leaving behind the Gilded Age offer valuable lessons for what we should do now and what we should avoid.

The First Gilded Age revealed glitter on the surface, but underneath, America was fraught with social inequality and superficial prosperity. On the surface, the U.S. appeared to be thriving as industries and technologies emerged, cities grew and extreme wealth was created for the fortunate. 

However, only a few were able to participate, causing social and economic problems nationwide.

Vast economic inequality was one of the most defining features of the First Gilded Age. 

Wealth was concentrated in the hands of a few industrial magnates, such as John D. Rockefeller and Andrew Carnegie, while the majority of Americans struggled to make ends meet.

Today, people such as Jeff Bezos, Elon Musk and Mark Zuckerberg are the modern equivalents of Rockefeller and Carnegie. Their companies wield significant influence over the economy and society, often operating with minimal regulation and oversight.  

Now, we see a similar pattern. The top 1% of Americans hold more wealth than the bottom 90% combined. The COVID-19 pandemic has exacerbated these disparities, with billionaires significantly increasing their wealth while millions face financial instability.

As economic disparities grow, the stability of the middle class erodes. The American Dream, once a central tenet of the nation’s identity, now appears increasingly elusive for many. The rising costs of education, healthcare and housing have put significant pressure on middle-income families, leading to increased financial strain and decreased social mobility.

People know changes are coming. We will elect a new president on Nov. 5, and no matter the winner, the outcome will bring big changes to our lives.

The parallels between the two Gilded Ages are striking.  

In both periods, a small elite amassed tremendous wealth and power, often through industries that dominated the economy. In the 19th century, it was railroads, steel and oil. Today, it’s technology, finance and pharmaceuticals.

The influence of corporate money in politics has intensified, as seen through lobbying and campaign financing. This concentration of power raises questions about the efficacy of democratic institutions and the ability of the government to regulate corporate behavior in the public interest. The historical parallel is clear: Just as the First Gilded Age faced challenges regarding monopolies and regulations, today we grapple with similar issues amid calls for increased corporate responsibility. 

In the end, the question is not what kind of country we are, but what kind we want to be.

Jim Martin can be reached at Jimmartinesq@gmail.com. This column first ran in the Boulder Daily Camera.


How AI made Mark Zuckerberg popular again in Silicon Valley

Sat, 08 Jun 2024

When Mark Zuckerberg, the CEO of Meta, announced last year that his company would release an artificial intelligence system, Jeffrey Emanuel had reservations.

Emanuel, a part-time hacker and full-time AI enthusiast, had tinkered with “closed” AI models, including OpenAI’s, meaning the systems’ underlying code could not be accessed or modified. When Zuckerberg introduced Meta’s AI system by invitation only to a handful of academics, Emanuel was concerned that the technology would remain limited to just a small circle of people.

But in a release last summer of an updated AI system, Zuckerberg made the code “open source” so that it could be freely copied, modified and reused by anyone.

Emanuel, the founder of the blockchain startup Pastel Network, was sold. He said he appreciated that Meta’s AI system was powerful and easy to use. Most of all, he loved how Zuckerberg was espousing the hacker code of making the technology freely available — largely the opposite of what Google, OpenAI and Microsoft have done.

“We have this champion in Zuckerberg,” Emanuel, 42, said. “Thank God we have someone to protect the open-source ethos from these other big companies.”

Zuckerberg has become the highest-profile technology executive to support and promote the open-source model for AI. That has put the 40-year-old billionaire squarely on one end of a divisive debate over whether the potentially world-changing technology is too dangerous to be made available to any coder who wants it.

Microsoft, OpenAI and Google have more of a closed AI strategy to guard their tech, out of what they say is an abundance of caution. But Zuckerberg has loudly stood behind how the technology should be open to all.

“This technology is so important, and the opportunities are so great, that we should open source and make it as widely available as we responsibly can, so that way everyone can benefit,” he said in an Instagram video in January.

That stance has turned Zuckerberg into the unlikely man of the hour in many Silicon Valley developer communities, prompting talk of a “glow-up” and a kind of “Zuckaissance.” Even as the CEO continues grappling with scrutiny over misinformation and child safety issues on Meta’s platforms, many engineers, coders, technologists and others have embraced his position on making AI available to the masses.

Since Meta’s first fully open-source AI model, called LLaMA 2, was released in July, the software has been downloaded more than 180 million times, the company said. A more powerful version of the model, LLaMA 3, which was released in April, reached the top of the download charts on Hugging Face, a community site for AI code, at record speed.

Developers have created tens of thousands of their own customized AI programs on top of Meta’s AI software to perform everything from helping clinicians read radiology scans to creating scores of digital chatbot assistants.

“I told Mark, I think that open sourcing LLaMA is the most popular thing that Facebook has done in the tech community — ever,” said Patrick Collison, CEO of the payments company Stripe, who recently joined a Meta advisory group aimed at helping the company make strategic decisions about its AI technology. Meta owns Facebook, Instagram and other apps.

Zuckerberg’s new popularity in tech circles is striking because of his fraught history with developers. Over two decades, Meta has sometimes pulled the rug out from under coders. In 2013, for instance, Zuckerberg bought Parse, a company that built developer tools, to attract coders to build apps for Facebook’s platform. Three years later, he shuttered the effort, angering developers who had invested their time and energy in the project.

A spokesperson for Zuckerberg and Meta declined to comment. (The New York Times last year sued OpenAI and its partner, Microsoft, claiming copyright infringement of news content related to AI systems.)

Open-source software has a long and storied history in Silicon Valley, with major tech battles revolving around open versus proprietary — or closed — systems.

In the internet’s early days, Microsoft jockeyed to provide the software that ran internet infrastructure, only to eventually lose out to open-source software projects. More recently, Google open sourced its Android mobile operating system to take on Apple’s closed iPhone operating system. Firefox, the internet browser; WordPress, a blogging platform; and Blender, a popular set of animation software tools, were all built using open-source technologies.

Zuckerberg, who founded Facebook in 2004, has long backed open-source technology. In 2011, Facebook started the Open Compute Project, a nonprofit that freely shares designs of servers and equipment inside data centers. In 2016, Facebook also developed PyTorch, an open-source software library that has been widely used to create AI applications. The company is also sharing blueprints of computing chips that it has developed.

“Mark is a great student of history,” said Daniel Ek, Spotify’s CEO, who considers Zuckerberg a confidant. “Over time in the computing industry, he’s seen that there’s always been closed and open paths to take. And he has always defaulted to open.”

At Meta, the decision to open source its AI was contentious. In 2022 and 2023, the company’s policy and legal teams supported a more conservative approach to releasing the software, fearing a backlash among regulators in Washington and the European Union. But Meta technologists like Yann LeCun and Joelle Pineau, who spearhead AI research, pushed the open model, which they argued would better benefit the company in the long term.

The engineers won. Zuckerberg agreed that if the code was open, it could be improved and safeguarded faster, he said in a post last year on his Facebook page.

While open sourcing LLaMA means giving away computer code that Meta spent billions of dollars to create with no immediate return on investment, Zuckerberg calls it “good business.” The more developers use Meta’s software and hardware tools, the more likely they are to become invested in its technology ecosystem, which helps entrench the company.

The technology has also helped Meta improve its own internal AI systems, aiding ad targeting and recommendations of more relevant content on Meta’s apps.

“It is 100% aligned with Zuckerberg’s incentives and how it can benefit Meta,” said Nur Ahmed, a researcher at MIT Sloan who studies AI. “LLaMA is a win-win for everybody.”

Competitors are taking note. In February, Google open sourced the code for two AI models, Gemma 2B and Gemma 7B, a sign that it was feeling the heat from Zuckerberg’s open-source approach. Google did not respond to requests for comment. Other companies, including Microsoft, Mistral, Snowflake and Databricks, have also started offering open-source models this year.

For some coders, Zuckerberg’s AI approach hasn’t erased all of the baggage of the past. Sam McLeod, 35, a software developer in Melbourne, Australia, deleted his Facebook accounts years ago after growing uncomfortable with the company’s track record on user privacy and other factors.

But more recently, he said, he recognized that Zuckerberg had released “cutting edge” open-source software models with “permissive licensing terms,” something that can’t be said for other big tech companies.

Matt Shumer, 24, a developer in New York, said he had used closed AI models from Mistral and OpenAI to power digital assistants for his startup, HyperWrite. But after Meta released its updated open-source AI model last month, Shumer started relying heavily on that instead. Whatever reservations he had about Zuckerberg are in the past.

“Developers have started to see past a lot of issues they’ve had with him and Facebook,” Shumer said. “Right now, what he’s doing is genuinely good for the open-source community.”

This article originally appeared in .

Get more business news by signing up for our Economy Now newsletter.

Meta’s AI assistant is fun to use, but it can’t be trusted (Sat, 04 May 2024)

In the past few days, you may have noticed something new inside Meta’s apps, including Instagram, Messenger and WhatsApp: an artificially intelligent chatbot.

Within those apps, you can chat with Meta AI and type in questions and requests like “What’s the weather this week in New York?” or “Write a poem about two dogs living in San Francisco.” The assistant will come up with responses immediately, such as “The corgi was short, with a butt so wide, the lab was tall, with a tongue that would glide.” You can also instruct Meta AI to produce pictures — like an illustration of a family watching fireworks.

This is Meta’s response to OpenAI’s ChatGPT, the chatbot that upended the tech industry in 2022, and similar bots including Google’s Gemini and Microsoft’s Bing AI. The Meta bot’s image generator also competes with AI imaging tools like Adobe’s Firefly, Midjourney and DALL-E.

Unlike other chatbots and image generators, Meta’s AI assistant is a free tool baked into apps that billions of people use every day, making it the most aggressive push yet from a big tech company to bring this flavor of artificial intelligence — known as generative AI — to the mainstream.

“We believe Meta AI is now the most intelligent AI assistant that you can freely use,” Mark Zuckerberg, the company’s CEO, wrote on Instagram on April 18.

The new bot invites you to “ask Meta AI anything” — but my advice, after testing it for six days, is to approach it with caution. It makes lots of mistakes when you treat it as a search engine. For now, you can have some fun: Its image generator can be a clever way to express yourself when chatting with friends.

A Meta spokesperson said that because the technology was new, it might not always return accurate responses, similar to other AI systems. There is no way to turn off Meta AI inside the apps.

Here’s what doesn’t work well — and what does — in Meta’s AI.

It’s not a search engine

Meta announced its chatbot as a replacement for web search. By typing queries for Meta AI into the search bar at the top of Messenger or Instagram, a group of friends planning a trip could look up flights while chatting, the company said.

I’ll be blunt: Don’t do this. Meta AI fails spectacularly at basic search queries like looking up recipes, airfares and weekend activities.

In response to my request to look up flights from New York to Colorado, the chatbot listed instructions on how to take public transportation from the Denver airport to downtown. And when I asked for flights from Oakland, California, to Puerto Vallarta in Mexico, the bot listed flights departing from Seattle, San Francisco and Los Angeles.

When I asked Meta AI to look up a recipe for baking Japanese milk bread, the bot produced a generic bread recipe that skipped the most important step: tangzhong, the technique that involves cooking flour and milk into a paste.

The AI also made up other basic information. When I asked it for suggestions for a romantic weekend in Oakland, its list included a fictional business. And when I asked it to tell me about myself — Brian Chen the journalist — it said I worked at The New York Times but incorrectly mentioned a tech blog I’ve never written for, The Verge.

Bing AI and Gemini, which are hooked directly into the Microsoft and Google search engines, did better at these types of search tasks, but clicking on a link through an old-fashioned web search is still more efficient.

Don’t ask it to count

AI chatbots work by looking for patterns in how words are used together, similar to the predictive text systems on our phones that suggest words to complete a sentence. All of them have struggled with numbers.

Unsurprisingly, Meta’s assistant stinks at counting. When you ask it for a five-syllable word starting with the letter w, it will respond with “wonderfully,” which has four syllables. When you ask it for a four-syllable word starting with w, it will offer “wonderful,” which has three syllables. Gemini and ChatGPT also fail at these tests.
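The pattern-matching idea described above can be made concrete with a toy model. The sketch below is a hypothetical illustration only — real chatbots are vastly larger and more sophisticated — but it shows the core trick of predicting the next word from observed word pairs, and why a system built on word patterns has no inherent notion of letters or syllables:

```python
from collections import Counter, defaultdict

# Toy next-word predictor: learn which word tends to follow which,
# the same pattern-matching idea behind phone autocomplete.
corpus = "the dog ran . the dog slept . the cat slept".split()

following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1  # count each observed word pair

def predict(word):
    """Return the most frequent word seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict("the"))  # "dog" ("the dog" appears twice, "the cat" once)
```

Because the model tracks only which tokens follow which, it never inspects the letters inside “wonderfully,” let alone its syllables — a blind spot that large chatbots inherit in their own way.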

Focus on words

Like other chatbots, Meta’s performed better the more information you gave it.

It excelled at editing existing paragraphs. For example, when I fed Meta AI paragraphs that felt verbose and asked for the paragraph to be tightened, the chatbot trimmed all the unnecessary words. When I asked it to improve a sentence written in passive voice, the bot rewrote it in active voice and added more context. When I asked it to remove jargon from a paragraph written by a tech blog, it rewrote highly technical terms in plain language.

It’s a fine study guide

Because Meta AI is better when it works with existing text, it can be helpful for studying. For instance, if you’re taking a history class and studying World War II, you can paste a website with information about the war into the search bar and then ask the bot to quiz you. The chatbot will read the information on the website and generate a multiple-choice test.

You can use it for fun emojis

The most compelling aspect of Meta AI is its ability to generate images by typing “/imagine” followed by a description of the desired image. For instance, “/imagine a photograph of a cat sleeping on a window sill” will produce a convincing image in a few seconds.

Meta’s AI is much faster than other image generators like Midjourney, which can take more than a minute. The results can be very weird — images of people occasionally lacked limbs or looked cross-eyed.

Ethics experts have raised concerns about the implications of generating fake images because they can contribute to the spread of misinformation online. But in the context of using AI while chatting with friends and family in WhatsApp and Messenger, Meta AI is a positive example of how generating fake images can be fun — and safe — if we treat it as a new form of emoji.

In a group conversation with my in-laws, I mentioned I was shopping for a robust baby stroller that could withstand the crooked roads of my neighborhood. In seconds, my wife used Meta AI to generate an image of a stroller with enormous wheels that made it resemble a monster truck, stamped with a helpful label that said, “Imagined with AI.”

Meta’s smart glasses are becoming artificially intelligent. We took them for a spin. (Sat, 06 Apr 2024)

In a sign that the tech industry keeps getting weirder, Meta soon plans to release a big update that transforms the Ray-Ban Meta, its camera glasses that shoot videos, into a gadget seen only in sci-fi movies.

This month, the glasses will be able to use new artificial intelligence software to see the real world and describe what you’re looking at, similar to the AI assistant in the movie “Her.”

The glasses, which come in various frames starting at $300 and lenses starting at $17, have mostly been used for shooting photos and videos and listening to music. But with the new AI software, they can be used to scan famous landmarks, translate languages and identify animal breeds and exotic fruits, among other tasks.

A smartphone screenshot made using Meta's software in San Francisco. The artificial intelligence technology in Meta's new Ray-Ban smart glasses uses cameras and image recognition to give the wearer information about what he or she is looking at. (Mike Isaac/The New York Times)

To use the AI software, wearers just say, “Hey, Meta,” followed by a prompt, such as “Look and tell me what kind of dog this is.” The AI then responds in a computer-generated voice that plays through the glasses’ tiny speakers.

The concept of the AI software is so novel and quirky that when we — Brian X. Chen, a tech columnist who reviewed the Ray-Bans last year, and Mike Isaac, who covers Meta and wears the smart glasses to produce a cooking show — heard about it, we were dying to try it. Meta gave us early access to the update, and we took the technology for a spin over the past few weeks.

We wore the glasses to the zoo, grocery stores and a museum while grilling the AI with questions and requests.

The upshot: We were simultaneously entertained by the virtual assistant’s goof-ups — for example, mistaking a monkey for a giraffe — and impressed when it carried out useful tasks such as determining that a pack of cookies was gluten-free.

A Meta spokesperson said that because the technology was still new, the artificial intelligence wouldn’t always get things right, and that feedback would improve the glasses over time.

Meta’s software also created transcripts of our questions and the AI’s responses, which we captured in screenshots. Here are the highlights from our month of coexisting with Meta’s assistant.

Pets

BRIAN: Naturally, the very first thing I had to try Meta’s AI on was my corgi, Max. I looked at the plump pooch and asked, “Hey, Meta, what am I looking at?”

“A cute Corgi dog sitting on the ground with its tongue out,” the assistant said. Correct, especially the part about being cute.

MIKE: Meta’s AI correctly recognized my dog, Bruna, as a “black and brown Bernese Mountain dog.” I half expected the AI software to think she was a bear, the animal that she is most consistently mistaken for by neighbors.

Zoo animals

BRIAN: After the AI correctly identified my dog, the logical next step was to try it on zoo animals. So I recently paid a visit to the Oakland Zoo in Oakland, California, where, for two hours, I gazed at about a dozen animals, including parrots, tortoises, monkeys and zebras. I said: “Hey, Meta, look and tell me what kind of animal that is.”

The AI was wrong the vast majority of the time, in part because many animals were caged off and farther away. It mistook a primate for a giraffe, a duck for a turtle and a meerkat for a giant panda, among other mix-ups. On the other hand, I was impressed when the AI correctly identified a species of parrot known as the blue-and-gold macaw, as well as zebras.

The strangest part of this experiment was speaking to an AI assistant around children and their parents. They pretended not to listen to the only solo adult at the park as I seemingly muttered to myself.

Food

MIKE: I also had a peculiar time grocery shopping. Being inside a Safeway and talking to myself was a bit embarrassing, so I tried to keep my voice low. I still got a few sideways looks.

When Meta’s AI worked, it was charming. I picked up a pack of strange-looking Oreos and asked it to look at the packaging and tell me if they were gluten-free. (They were not.) It answered questions like these correctly about half the time, though I can’t say it saved time compared with reading the label.

But the entire reason I got into these glasses in the first place was to start my own Instagram cooking show — a flattering way of saying I record myself making food for the week while talking to myself. These glasses made doing so much easier than using a phone and one hand.

The AI assistant can also offer some kitchen help. If I need to know how many teaspoons are in a tablespoon and my hands are covered in olive oil, for example, I can ask it to tell me. (There are three teaspoons in a tablespoon, just FYI.)

But when I asked the AI to look at a handful of ingredients I had and come up with a recipe, it spat out rapid-fire instructions for an egg custard — not exactly helpful for following directions at my own pace.

A handful of examples to choose from could have been more useful, but that might require tweaks to the user interface and maybe even a screen inside my lenses.

A Meta spokesman said users could ask follow-up questions to get tighter, more useful responses from its assistant.

BRIAN: I went to the grocery store and bought the most exotic fruit I could find — a cherimoya, a scaly green fruit that looks like a dinosaur egg. When I gave Meta’s AI multiple chances to identify it, it made a different guess each time: a chocolate-covered pecan, a stone fruit, an apple and, finally, a durian, which was close, but no banana.

Monuments and museums

MIKE: The new software’s ability to recognize landmarks and monuments seemed to be clicking. Looking down a block in downtown San Francisco at a towering dome, Meta’s AI correctly responded, “City Hall.” That’s a neat trick and perhaps helpful if you’re a tourist.

Other times were hit or miss. As I drove home from the city to my house in Oakland, I asked Meta what bridge I was on while looking out the window in front of me (both hands on the wheel, of course). The first response was the Golden Gate Bridge, which was wrong. On the second try, it figured out I was on the Bay Bridge, which made me wonder if it just needed a clearer shot of the newer portion’s tall, white suspension poles to be right.

BRIAN: I visited San Francisco’s Museum of Modern Art to check if Meta’s AI could do the job of a tour guide. After snapping photos of about two dozen paintings and asking the assistant to tell me about the piece of art I was looking at, the AI could describe the imagery and what media was used to compose the art — which would be nice for an art history student — but it couldn’t identify the artist or title. (A Meta spokesman said another software update it released after my museum visit improved this ability.)

After the update, I tried looking at images on my computer screen of more famous works of art, including the Mona Lisa, and the AI correctly identified those.

Languages

BRIAN: At a Chinese restaurant, I pointed at a menu item written in Chinese and asked Meta to translate it into English, but the AI said it currently only supported English, Spanish, Italian, French and German. (I was surprised, because Mark Zuckerberg learned Mandarin.)

MIKE: It did a pretty good job translating a book title into German from English.

Bottom line

Meta’s AI-powered glasses offer an intriguing glimpse into a future that feels distant. The flaws underscore the limitations and challenges in designing this type of product. The glasses could probably do better at identifying zoo animals and fruit, for instance, if the camera had a higher resolution — but a nicer lens would add bulk. And no matter where we were, it was awkward to speak to a virtual assistant in public. It’s unclear if that will ever feel normal.

But when it worked, it worked well and we had fun — and the fact that Meta’s AI can do things like translate languages and identify landmarks through a pair of hip-looking glasses shows how far the tech has come.

Silicon Valley ditches news, shaking an unstable industry (Sat, 28 Oct 2023)

SAN FRANCISCO — Campbell Brown, Facebook’s top news executive, said this month that she was leaving the company. Twitter, now known as X, removed headlines from the platform days later. The head of Instagram’s Threads app, an X competitor, reiterated that his social network would not amplify news.

Even Google — the strongest partner to news organizations over the past 10 years — has become less dependable, making publishers more wary of their reliance on the search giant. The company has laid off news employees in two recent team reorganizations, and some publishers say traffic from Google has tapered off.

If it wasn’t clear before, it’s clear now: The major online platforms are breaking up with news.

Some executives of the largest tech companies, like Adam Mosseri at Instagram, have said in no uncertain terms that hosting news on their sites can often be more trouble than it is worth because it generates polarized debates. Others, like Elon Musk, the owner of X, have expressed disdain for the mainstream press. Publishers seem resigned to the idea that traffic from the big tech companies will not return to what it once was.

Even in the long-fractious relationship between publishers and tech platforms, the latest rift stands out — and the consequences for the news industry are stark.

Many news companies have struggled to survive after the tech companies threw the industry’s business model into upheaval more than a decade ago. One lifeline was the traffic — and, by extension, advertising — that came from sites like Facebook and Twitter.

Now that traffic is disappearing. Top news sites got about 11.5% of their web traffic in the United States from social networks in September 2020, according to Similarweb, a data and analytics company. By September this year, it was down to 6.5%.

“The disruption to an already difficult business model is real,” Adrienne LaFrance, executive editor of The Atlantic, said in an interview. LaFrance noted that while social traffic had always gone through boom and bust times, the slide in the past 12 to 18 months had been more severe than most publishers expected.

“This is a post-social web,” she added.

A spokesperson for Meta, which owns Facebook, Instagram and Threads, declined to comment. Musk and a spokesperson for Linda Yaccarino, X’s CEO, did not respond to a request for comment.

Jaffer Zaidi, Google’s vice president of global news partnerships, said in a statement that the company continued to put a priority on “sending valuable traffic to publishers and supporting a healthy, open web.”

It didn’t start out this way. During the rise of the consumer internet roughly 20 years ago, companies like Google, Facebook and Twitter embraced journalism, and articles from traditional media companies appeared on their platforms.

“Every internet platform has a responsibility to try to help fund and form partnerships to support news,” Mark Zuckerberg, the founder of Facebook, said in an interview with the CEO of News Corp. several years ago when Zuckerberg was still trying to court publishers.

Both Facebook and Twitter toyed with initiatives to support news on their platforms. In 2019, for example, Facebook introduced Facebook News, a tab for readers to find news coverage from partner publications that it paid. Twitter also experimented with partnerships, teaming up with The Associated Press and Reuters in 2021 to address misinformation.

But these efforts were short-lived. Facebook News is no longer, and Brown, the executive who led the news efforts, has announced her departure. Since Musk bought Twitter nearly a year ago, he has introduced changes that de-emphasized traditional media on the site, including not showing headlines on articles in posts and removing the “verified” blue check mark from journalists and public figures who did not pay for it. Platforms like TikTok, Snapchat and Instagram generate negligible traffic numbers to media outlets.

The sharp decline in referral traffic from social media platforms over the past two years has hit all news publishers, including The New York Times.

The Wall Street Journal noticed a decline starting about 18 months ago, according to a recording of a September staff meeting obtained by the Times. “We are at the mercy of social algorithms and tech giants for much of our distribution,” Emma Tucker, the Journal’s editor-in-chief, told the newsroom in the meeting.

Ben Smith, the editor-in-chief of Semafor and a former media columnist for the Times, said web traffic was no longer “the god metric in digital media.” He said intermediate platforms like SmartNews, Apple News and Flipboard were becoming more important to publishers, as readers looked for a combination of authoritative journalism and the option of multiple sources.

“People do like having lots of sources of information, but they don’t want to be nosing around a postapocalyptic wasteland to find them,” Smith said.

With Meta and X no longer dependable, publishers have grown more reliant on Google. For more than two decades, publishers big and small have packaged their content to rank highly in Google’s search results, a practice called search engine optimization. These deeply integrated efforts include creating secondary headlines meant to mimic likely Google user queries, filling articles with links to other sites and maintaining teams of people to drive traffic and stay abreast of search engine changes.

Google says it sends 24 billion clicks per month, or 9,000 per second, to news publishers’ websites through its search engine and associated news page.
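For what it’s worth, the two numbers in Google’s claim are roughly consistent with each other. A quick back-of-the-envelope check, assuming a 30-day month:

```python
# Sanity check on Google's figure: 24 billion clicks a month works out
# to roughly 9,000 clicks per second (assuming a 30-day month).
clicks_per_month = 24_000_000_000
seconds_per_month = 30 * 24 * 60 * 60  # 2,592,000 seconds

clicks_per_second = clicks_per_month / seconds_per_month
print(round(clicks_per_second))  # about 9,259 — close to the quoted 9,000
```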

While The Los Angeles Times is getting a slightly larger share of traffic from online searches (50% to 60%, up from 30% to 40%), it is not making up for the losses from social media, said Samantha Melbourneweaver, assistant managing editor for audience.

But even Google is shaky. Some publishers have seen declines in Google referral traffic in recent weeks, two people at different major media sites said. Although Google remains the most important referral traffic source to publishers by far, those people are concerned that the decline is a sign of things to come.

“It’s volatile,” Melbourneweaver said. “Google exists for Google’s needs, rather than for ours.”

Google cut some members of its news partnership team in September, and this week it laid off as many as 45 workers from its Google News team, the Alphabet Workers Union said. (The Information, a tech news website, reported the Google News layoffs earlier.)

“We’ve made some internal changes to streamline our organization,” Jenn Crider, a Google spokesperson, said in a statement.

The news partnership team was established to forge agreements and partnerships with publishers, and over time it introduced programs to train newsrooms, support the development of news products and respond to governments around the world that have pressed Google to share more revenue with news organizations.

Zaidi wrote in an internal memo reviewed by the Times that the team would be adopting more varied responsibilities. “We had to make some difficult decisions to better position our team for what lies ahead,” he wrote.

Google has been on an AI push all year, releasing an AI chatbot called Bard in March and offering some users in May a version of its search engine that can generate explanations, poetry and prose above traditional web results. News organizations have expressed concern that these AI systems, which can answer users’ questions without their clicking a link, could one day erode traffic to their sites.

Privately, a number of publishers have discussed what a post-Google traffic future may look like and how to better prepare if Google’s AI products become more popular and further bury links to news publications.

LaFrance said The Atlantic was pushing branded newsletters, its homepage and its print magazine. At the end of June, The Atlantic had more than 925,000 paid subscribers across its print and digital products, an increase of 10% from a year earlier, the company said.

“Direct connections to your readership are obviously important,” LaFrance said. “We as humans and readers should not be going only to three all-powerful, attention-consuming megaplatforms to make us curious and informed.”

She added: “In a way, this decline of the social web — it’s extraordinarily liberating.”

AI makes hiding your kids’ identity on the internet more important than ever. But it’s also harder to do. (Sat, 21 Oct 2023)

There are two distinct factions of parents on TikTok: those who will crack eggs over their kids’ heads for laughs and those who are trying desperately to make sure the internet doesn’t know who their children are.

For the TikTok star who posts under the name Kodye Elyse, an uncomfortable online experience made her stop including her three children on her social media. A video she posted in 2020 of her young daughter dancing attracted millions of views and creepy comments from strange men. (She requested that The New York Times not print her full name because she and her children have been doxxed in the past.)

“It’s kind of like ‘The Truman Show’ on the internet,” said Kodye Elyse, 35, who has 4 million followers on TikTok and posts about her work as a cosmetic tattoo artist and her experiences as a single mother. “You never know who’s looking.”

After that experience, she scrubbed her children’s images from the internet. She tracked down all of her online accounts, on sites such as Facebook and Pinterest, and deleted them or made them private. She has since joined the clamorous camp of TikTokers encouraging fellow parents not to post about their children publicly.

But in September, she discovered her efforts hadn’t been entirely successful. Kodye Elyse used PimEyes, an alarming search engine that finds photos of a person on the internet within seconds using facial recognition technology. When she uploaded a photo of her 7-year-old son, the results included an image of him she had never seen before. She needed a $29.99 subscription to see where the image had come from.

Her former husband had taken their son to a soccer game, and they were in the background of a photograph on a sports news site, sitting in the front row behind the goal. She realized she wouldn’t be able to get the news organization to take down the photo, but she filled out an opt-out request on PimEyes to remove her son’s image so that it would not show up if other people searched for his face. She also found a toddler-age photo of her daughter, now 9, being used to promote a summer camp she had attended. She asked the camp to take down the photo, which it did.

“I think everybody should be checking that,” she said. “It’s a good way to know that no one is repurposing your kids’ images.”

Beware of ‘Sharenting’

How much parents should post about their children online has been discussed and scrutinized to such an intense degree that it has its own portmanteau: “sharenting.”

Historically, the main criticism of parents who overshare online has been the invasion of their progeny’s privacy, but advances in artificial intelligence-based technologies present new ways for bad actors to misappropriate online content of children.

Among the novel risks are scams featuring deepfake technology that mimic children’s voices and the possibility that a stranger could learn a child’s name and address from just a search of their photo.

Amanda Lenhart, the head of research at Common Sense Media, a nonprofit that offers media advice to parents, pointed to a recent public service campaign from Deutsche Telekom that urged more careful sharing of children’s data.

The video featured an actress portraying a 9-year-old named Ella, whose fictional parents were indiscreet about posting photos and videos of her online. Deepfake technology generated a digitally aged version of Ella who admonishes her fictional parents, telling them that her identity has been stolen, her voice has been duplicated to trick them into thinking she’s been kidnapped and a nude photo of her childhood self has been exploited.

Lenhart called the video “heavy-handed” but said it showed that “actually this technology is really quite good.” People are already receiving calls from scammers imitating loved ones in peril using versions of their voices created with AI tools.

Jennifer DeStefano, an Arizona mother, got a call this year from someone who claimed to have kidnapped her 15-year-old daughter. “I answered the phone ‘Hello’; on the other end was our daughter Briana sobbing and crying saying, ‘Mom,’” she said in congressional testimony this summer.

DeStefano was negotiating to pay the kidnappers $50,000 when she discovered that her daughter was at home “resting safely in bed.”

What a Face Reveals

Obscure online photos and videos might be linked to someone’s face with facial recognition technology, which has grown in power and accuracy in recent years. Photos taken at a school, a day care, a birthday party or a playground could show up in a search. (A school or day care should present you with a waiver; feel free to say no.)

“When a child is younger, the parent has more control over their image,” said Debbie Reynolds, a data privacy and emerging technologies consultant. “But kids grow up. They have friends. They go to parties. Schools take pictures.”

Reynolds recommends that parents search online for their children’s faces using a service like PimEyes or FaceCheck.ID. If they don’t like what comes up, they should try to get the websites the photo was posted on to take it down, she said. (Some will, but others — like news outlets — might not.)

In a 2020 Pew Research survey, more than 80% of parents reported sharing photos, videos and information about their children on social media sites. Experts were unable to say how many parents are sharing those images only on private social media accounts, as opposed to publicly, but they said that private sharing was an increasingly common practice.

When I share digital photos of my daughters, I tend to use private messaging apps and an Instagram account limited to friends and family. But when I searched for their faces on PimEyes, I also discovered a public photo I had forgotten about — that accompanied a story I had written — of my daughter, now 6, when she was 2. I requested that PimEyes remove the image from its results, and it no longer appears in a search.

While a public face search engine is a potentially useful tool for a parent, it could also be used for nefarious purposes.

“A tool like PimEyes can be — and likely is — used as easily by a stalker as it is a concerned parent,” said Bill Fitzgerald, a privacy researcher, who also expressed concern about overbearing parents using it to monitor their teen children’s activities.

PimEyes’ owner, Giorgi Gobronidze, said more than 200 accounts had been deactivated on the site for inappropriate searches of children’s faces.

A similar face recognition engine, Clearview AI, whose use is limited to law enforcement, has been used to identify victims in photos of child sexual abuse. Gobronidze said PimEyes had been used similarly by human rights organizations to help children. But he is worried enough about potential child predators using the service that PimEyes is working on a feature to block searches of faces that appear to belong to minors. (Fitzgerald is concerned that parents using the tool to look for their own children might be unintentionally helping the PimEyes algorithm improve its recognition of those minors.)

Mimi Ito, a cultural anthropologist and director of the Connected Learning Lab at the University of California, Irvine, said facial recognition technology made the otherwise joyful sharing of children’s photos online more challenging.

“There’s a growing awareness that with AI, we don’t really have control of all the data that we’re spewing into the social media ecosystem,” she said.

Controlling an Online Footprint

Lucy and Mike Fitzgerald, professional ballroom dancers in St. Louis who maintain an active social media presence to advertise their business, refrain from posting images of their daughters, ages 5 and 3, online, and have asked friends and family members to respect the prohibition. They believe their daughters should have the right to create and control their own online footprints. They also worry their images might be used inappropriately.

“The fact that you can steal someone’s photo in a couple of clicks and then use it for whatever you want is concerning,” Lucy Fitzgerald said. “I understand the appeal of posting your kids’ photos, but ultimately, we don’t want them to be the ones to have to deal with potential unintended consequences.”

Fitzgerald and her husband are not experts who were “informed about what’s looming on the horizon of tech,” she said. But, she added, they “had a feeling” years ago that there were “going to be capabilities that we can’t foresee right now that will eventually be problematic for our kids.”

Parents who are more likely to know specifics about what’s looming on the tech horizon, including Edward Snowden, the National Security Agency contractor turned whistleblower, and Mark Zuckerberg, the Facebook co-founder, conceal their children’s faces in otherwise public social media posts. In holiday-themed posts on Instagram, Zuckerberg used the clumsy emoji method — posting a digital sticker on his older children’s heads — while Snowden and his wife, Lindsay Mills, artfully posed one of their two sons behind a balloon to obscure his face.

“I want my kids to have the option to disclose themselves into the world, in whatever form they choose, whenever they are ready,” Mills said.

A spokesperson for Zuckerberg declined to comment on why his baby’s face didn’t get the same treatment, or on whether that was because facial recognition technology doesn’t work very well on infants.

Privacy and Future Success

Many experts noted that teenagers thought a lot about how they curated their digital identities, and that some used pseudonyms online to prevent parents, teachers and potential employers from finding their accounts. But if there is a public image on that account that features their face, it could still be linked back to them with a face search engine.

“Your face is very hard to keep off of the web,” said Priya Kumar, an assistant professor at Pennsylvania State University who has studied the privacy implications of sharenting.

Kumar suggests that parents involve children, around the age of 4, in the process of posting — and talk to them about which images are OK to share.

Amy Webb, the CEO of Future Today Institute, a business consultancy that focuses on technology, pledged in a Slate post a decade ago not to post personal photos or identifying information of her toddler online. (Some readers took this as a challenge, and found a family photo Webb had inadvertently made public, illustrating just how hard it can be to keep a child off the internet.) Her daughter, now a teenager, said she appreciated being an “online ghost,” and thought it would help her professionally.

Future employers “are going to find literally nothing on me because I don’t have any platforms,” she said. “It’s going to help me succeed in my future.”

Other young people who have grown up in the age of online sharing said they, too, were thankful to have parents who did not post photos of them publicly online. Shreya Nallamothu, 16, is a student whose research on child influencers helped lead to a new Illinois state law that requires parents to set aside earnings for their children if they are featuring them in monetized online content. She said she was “very grateful” that her parents didn’t post “super embarrassing moments of me on social media.”

“There are people in my grade who are really good at finding your classmates’ parents’ Facebook and scrolling down,” she said. They use any cringeworthy fodder for disappearing birthday posts on Snapchat.

Arielle Geismar, 22, a college student and digital safety advocate in Washington, described it as a “privilege to grow up without a digital identity being made for you.”

“Kids are currently technology’s guinea pigs,” Geismar said. “It’s our responsibility to take care of them.”

This article originally appeared in .

