AMD's Q1 revenue tanks; firm looks to AI for longer-term growth

AMD’s revenue for the first quarter of 2023 beat somewhat muted expectations but came in well below the same quarter last year, and net income plummeted into negative territory. The company also issued weak second-quarter guidance that triggered an after-hours stock plunge.

Q1 revenue came in at $5.4 billion, about 9% lower than the $5.88 billion posted for the same quarter in 2022. AMD also announced a Q1 GAAP net loss of $139 million, down a whopping 118% from last year’s Q1 net income of $786 million.

During the firm’s Q1 earnings call, AMD CEO Lisa Su lauded the company’s performance but acknowledged continuing headwinds: “In the near term, we continue to see a mixed demand environment based on the uncertainties in the macro environment.”

For the second quarter of this year, AMD expects revenue to be about $5.3 billion, plus or minus $300 million. That would represent a year-over-year decline of about 19%, according to AMD CFO and Treasurer Jean Hu. Following the earnings release, AMD’s stock price was down about 5% from its Tuesday closing price of $89.91.

There is reason for longer-term hope beyond the expected second-quarter doldrums, and it has a lot to do with the AI market explosion that the rest of the semiconductor industry is also counting on. Su suggested that AMD will benefit from its continued push into AI, its ongoing development of new AI products, and the seemingly unstoppable growth of the overall AI market, most recently boosted by the popularity of generative AI applications.

“Looking longer term, we have significant growth opportunities ahead based on successfully delivering our roadmaps and executing our strategic data center and embedded priorities, and led by accelerating adoption of our AI products,” she said. “We are in the very early stages of the AI explosion in computing, and the rate of adoption and growth is faster than any other technology in recent history. And as the recent interest in generative AI highlighted, large language models and other AI capabilities for cloud, edge, and endpoints require significant increases in compute performance.”