This article was originally published on January 3, 2024, though the ad had been running for some time before that. It continues to run now, and we get a lot of questions about it, so we’ve updated the coverage here to share some of the progress the company has made since January. Real news is still mostly “pending” on their latest chip products, and likely will be for a while, so NVIDIA can probably hold off on being afraid of this “killer” for the time being, and you’ve probably got plenty of time to think it over… but you can make that call.
Here’s the intro to the ad… it’s a pitch for Keith Kohl’s entry-level letter ($49/year, six-month refund period):
“The ‘NVIDIA KILLER’ Could Make You 120X Your Investment
“A tiny Californian company is about to dethrone Nvidia as the “King of AI.” If you get in BEFORE one critical announcement, a modest stake could turn into a fortune.
“***WARNING: You only have ONE chance to act.***”
NVIDIA (NVDA) has often been the best-performing large cap stock in the world over the past seven or eight years, providing investors with a return of more than 1,300% over just the past five years and 12,000% over a decade… though earning those gains would also have meant sitting through a roughly 60% decline in both 2018 and 2022, which is very difficult for most investors. So that’s the promise: that we have that kind of gain ahead from whatever secret stock Kohl is pitching.
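If you want a sense of what those headline returns mean on an annualized basis, here’s a quick back-of-the-envelope sketch, just arithmetic on the percentages quoted above rather than precise NVDA price data:

```python
# Rough annualized-return math for the NVDA gains cited above.
# A "1,300% return" means the position grew to 14x its starting value.

def annualized_return(total_gain_pct: float, years: float) -> float:
    """Convert a total percentage gain over `years` into a compound annual growth rate."""
    multiple = 1 + total_gain_pct / 100            # e.g. a 1,300% gain -> 14x
    return multiple ** (1 / years) - 1

print(f"1,300% over 5 years   -> ~{annualized_return(1_300, 5):.0%} per year")
print(f"12,000% over 10 years -> ~{annualized_return(12_000, 10):.0%} per year")
# The teased "120X" is an 11,900% gain; even spread over a decade that would be...
print(f"120x over 10 years    -> ~{annualized_return(11_900, 10):.0%} per year")
```

In other words, even the slow, spread-over-a-decade version of that promise assumes compounding of roughly 60% a year.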
And the “critical announcement” and “only one chance to act” are probably mostly hooey, particularly since this ad is using data that’s a year old, and I first covered the pitch back in January… but we’ll see. Teaser ads usually have a deadline and a catalyst to get people to commit to a subscription, but most of the time that short-term urgency doesn’t mean much.
So what’s our stock? Let’s sift the clues out of the ad for you, and see what the Thinkolator can tell us… here’s how the hints begin to rain down…
“You would never know it from the outside of this building in Sunnyvale, California…
“But the engineers working in this nondescript place are on the brink of something shocking.
“They’re about to unleash the most powerful technology humanity has ever seen.”
That’s just an image of a completely generic industrial park office building in California, but we’ll see if it matches our clues later on.
And the wild promises about potential gains are mostly just the generic “AI is gonna be huge” headlines we’ve all been seeing for the past year…
“… the engineers working in this nondescript place are on the brink of something shocking.
“They’re about to unleash the most powerful technology humanity has ever seen.
“ calls this technology “the greatest profit engine in history”…
“ says, “It will change the world”…
“And Bank of America claims, “It could revolutionize everything.”
“Wired magazine predicts it will unlock $15 trillion… while one insider even believes we’re facing a stunning $100 trillion windfall.
“And investors who position themselves correctly today could easily make 10x or even more than 100x their money.
“That’s enough to turn a modest $10,000 stake into $1 million.
“Such massive gains may sound unbelievable…
“But this technology just minted the youngest self-made billionaire in the world — a 25-year-old college dropout.”
That’s probably Alexandr Wang, one of the founders of Scale AI. I think he beat Austin Russell at Luminar Technologies to “youngest billionaire” when his ownership of Scale AI hit that valuation in mid-2023, just six years after he dropped out of MIT. Both rode wild enthusiasm and a wash of venture capital money to “paper billionaire” status in their mid-20s, though I think Wang was a few months younger when that level was breached. Russell hit that level a couple years ago, not long after dropping out of Stanford, so the key advice here, of course, is “drop out of a top-ten college if you wanna be a billionaire early” … though that’s also the path to a rewarding career in used car sales or photocopy machine repair, so choose carefully (Russell may have retreated from ‘billionaire’ at this point, I haven’t checked the value of his Luminar shares, but I’m sure he’s still doing just fine).
So what’s the big opportunity? Kohl says that AI will change the world, but that it faces a problem which will keep it “half-baked” …
“The major flaw I’m going to reveal today lies at the heart of it all.
“If this problem isn’t solved, AI will stay half-baked for years to come.
“But the little-known company from Sunnyvale I discovered owns the patent-protected key to unlock AI’s real potential.
“I predict this tiny firm is going to be flooded with hundreds of millions and even billions of dollars…
“Because major tech companies have no other choice than to switch to this new technology. Otherwise they won’t be able to ride the $100 trillion AI wave.”
I’m guessing that Kohl here is talking about the hardware and processing logjam created by heat, which was also the hook used for an AI pitch a few months back. Rahemtulla was talking up the necessity of moving to liquid cooling to make it possible to absorb all the heat from all these high-end NVIDIA processors that make things like ChatGPT seem magical, but I think Kohl is taking a different tack.
Let’s move on to some more clues…
“… as you’ll see in a moment, the company I discovered is destined to dethrone Nvidia as the “King of AI.”
“And I’m not the only one who says so.
“One analyst recently confirmed on Fox News that this firm is superior to Nvidia’s AI technology.”
And a few specifics…
“… this little-known firm has a market cap of just $153 million.
“Now compare this with Nvidia, which recently crossed the $1 trillion threshold.
“In other words, this firm is about 6,500x smaller than Nvidia. That’s why the growth potential you can seize today is totally off the charts.”
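The math in that last line is easy enough to check, and it’s also worth seeing what the promised “120X” would actually imply for the company’s size. This is just a quick sanity check using the $153 million and $1 trillion figures from the ad itself:

```python
# Sanity-checking the size comparison and the "120X" promise from the ad.
nvidia_market_cap = 1_000_000_000_000   # "$1 trillion," per the ad
teased_market_cap = 153_000_000         # "$153 million," per the ad

ratio = nvidia_market_cap / teased_market_cap
print(f"NVIDIA is roughly {ratio:,.0f}x larger")   # ~6,536x, close to the ad's "6,500x"

# If the share count stayed flat, a 120x move in the stock would imply:
implied_cap = teased_market_cap * 120
print(f"A 120x gain would value the company near ${implied_cap / 1e9:.1f} billion")   # ~$18.4 billion
```

So even the wildest version of the promise gets you to roughly an $18 billion company, still a tiny fraction of NVIDIA’s size.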
And yes, Kohl’s “NVIDIA Killer” is, in fact, another chip designer…
“This company from Sunnyvale developed a patent-protected chip that blows its competitors out of the water.
“You see, AI apps require almost infinite data-crunching, and this firm’s flagship chip outperforms other chips by a factor of 100 on big-data workloads.
“And that’s not all. These chips also consume far less power.
“Take Nvidia’s A100. It devours 400 watts of electricity. However, this firm’s chip runs on only 60 watts. That’s 85% less power consumption.
“This kind of performance slashes the cost of running AI systems radically and unleashes AI’s true potential.
“This firm’s customers include tech and defense giants like Lockheed Martin, Cisco, General Dynamics, Honeywell, Nokia, Raytheon, and Rockwell.
“Yet hardly any retail investors know this company’s name.
“But that’s all going to change soon because I believe this firm is about to dominate the entire AI sector.
“Like I said, this could be a replay of Nvidia’s incredible 12,035% surge….”
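That “85% less power” figure is just the arithmetic on the two wattage numbers in the ad, but the “slashes the cost” claim is really about electricity per unit of work, so here’s a toy comparison. The watts come from the ad; the electricity price and the always-on runtime below are placeholder assumptions of mine, not anything the company or Kohl has published:

```python
# Toy electricity-cost comparison using the wattage figures quoted in the ad.
# The $/kWh price and 24/7 runtime below are illustrative assumptions only.

A100_WATTS = 400          # per the ad
TEASED_CHIP_WATTS = 60    # per the ad

savings = (A100_WATTS - TEASED_CHIP_WATTS) / A100_WATTS
print(f"Power reduction: {savings:.0%}")   # 85%, matching the ad's claim

HOURS_PER_YEAR = 24 * 365
PRICE_PER_KWH = 0.10      # assumed $0.10 per kilowatt-hour

for name, watts in [("A100", A100_WATTS), ("teased chip", TEASED_CHIP_WATTS)]:
    kwh = watts / 1000 * HOURS_PER_YEAR
    print(f"{name}: ~{kwh:,.0f} kWh/year, ~${kwh * PRICE_PER_KWH:,.0f} of electricity")
```

Of course, what a data center buyer actually cares about is work done per watt and per dollar of hardware, not the wattage of a single chip, so the raw numbers only tell part of the story.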
And Kohl has picked out a point in NVIDIA’s development when it was key to buy the shares, back when Stanford built a supercomputer using NVIDIA chips in 2013, so that’s his corollary here — he thinks this secret little stock is about to make a similar breakthrough.
So before we get to that, I’ll just caution you that NVIDIA had already been working on AI for some years at that point, and had been a solidly profitable and mostly growing leader in the graphics chip business, mostly for higher-end video games, for more than a decade. It certainly took off starting a few years later, as their self-driving car chipsets grew, the video game business continued to expand, and they began to have meaningful sales of their data acceleration chips for data centers… but it was already an established company, worth close to $10 billion at that point. And yes, back then, a decade ago, $10 billion still seemed like a lot of money (for context, Intel, which was by far the biggest semiconductor company back then, had a market cap of about $120 billion). You don’t necessarily have to start with a penny stock to get NVIDIA-like returns, though it does fuel the imagination.
So it sounds like what Kohl is talking about is what has widely been touted as the “next wave” for AI — chips that are specifically designed for various AI tasks, not just the high-end “do anything” graphics powerhouse chips that are great at general AI tasks today, like the ones being snapped up from NVIDIA and AMD.
That has generally been the evolutionary process in computing — new technology and new software are created by and fueled by the most powerful chips, and then, once the needs are better understood and the market becomes big enough to justify designing custom silicon, application specific integrated circuits (ASICs) are fine-tuned for the processing tasks needed; those ASICs are a lot cheaper and more efficient than the high-end chips that are used in the initial breakthrough. That’s what happened most recently with cryptocurrency mining, which used regular CPUs at first, then, as competition heated up, a lot of mining moved to NVIDIA GPUs (because “crypto mining” really means “winning the race to calculate faster”), and then, once the market got big enough, a few companies invested in designing custom mining chips and hardware, and those took over a big chunk of the business.
Will that happen with AI? I don’t know. Probably, eventually. Certainly there’s enough money pouring into AI projects for a lot of custom ASIC designs to be funded, and there are a lot of different types of AI processing tasks, so the market might get chopped up as lots of different hardware enters the fray.
Is that what Kohl is pitching? Not exactly, we’re not talking about custom ASICs just yet, but it sounds like he has a chip company that has a different design for managing the memory bottlenecks and the high-throughput required for AI…
“… this company avoids the von Neumann bottleneck altogether.
“By computing data directly in the memory array.
“As a result, data doesn’t need to be moved around. This brilliant concept is a totally new way of processing.
“Crunching data directly where it’s stored upends how computers were designed for more than 70 years. It’s a radical upgrade to the von Neumann architecture…
“And it’s leaving competitors like Nvidia, Intel, and AMD in the dust.
“Like I said, this chip outperforms other processors by as much as 100x while consuming 85% less power.
“The best part about it?
“This company’s technology is protected by 125 patents, building an impenetrable moat around its vital technology.
“That’s why I expect this firm to be overwhelmed with orders soon. In fact, we can witness it unfolding already.
“The U.S. Air Force and Space Force secured this high-performance chip for a top-secret project…
“The Israeli Ministry of Defense successfully tested this technology for a sophisticated object-recognition app…
“And Elta, a major player in the aerospace industry, wants to use it for a breakthrough image-processing AI.
“Shares of this company recently jumped 512% within a few short weeks, but it’s not too late to get in.”
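To make that “von Neumann bottleneck” idea a bit more concrete: in a conventional setup, every candidate record has to cross the memory bus to reach the processor before it can be compared, while a compute-in-memory design does the comparison where the data already sits, so only the query and the results have to move. Here’s a toy sketch of that difference; the record count, record size, and bandwidth below are illustrative assumptions, not a model of GSI’s actual hardware:

```python
# Toy illustration of the von Neumann bottleneck for a big similarity search.
# All numbers are illustrative assumptions, not measurements of any real chip.

NUM_RECORDS = 1_000_000_000     # a "billion item" database, per the marketing copy
BYTES_PER_RECORD = 64           # assumed record size
MEMORY_BANDWIDTH = 100e9        # assumed 100 GB/s processor <-> DRAM link

# Conventional (von Neumann) search: every record crosses the memory bus once.
bytes_moved = NUM_RECORDS * BYTES_PER_RECORD
transfer_seconds = bytes_moved / MEMORY_BANDWIDTH
print(f"Conventional search moves {bytes_moved / 1e9:.0f} GB; "
      f"the data transfer alone takes ~{transfer_seconds:.2f} seconds")

# Compute-in-memory idea: comparisons happen inside the memory array, so only
# the query going in and a handful of matches coming out cross the bus.
bytes_moved_cim = BYTES_PER_RECORD + 100 * BYTES_PER_RECORD   # 1 query out, ~100 hits back (assumed)
print(f"Compute-in-memory moves only ~{bytes_moved_cim / 1e3:.1f} KB for the same search")
```

That’s the concept, anyway; whether any particular chip delivers the quoted 100x advantage on real workloads is a different question.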
And what’s that “one time opportunity” that Kohl pitches? Here’s how he puts it:
“This company is preparing the launch of a brand-new version of this chip as we speak.
“You see, the first version had a highly limited run. Only a few researchers and a couple high-end clients were granted access.
“I’m talking about organizations like the U.S. Air Force or the Weizmann Institute. This world-leading research facility used it to accelerate the search for treatments….
“The company’s CEO revealed in a recent conference call that the new version will be even faster and more energy-efficient than its predecessor.
“And when this company announces the release of the second generation, I expect it to be swamped with orders…
“And the stock to go parabolic as a result.”
Other hints? The stock recently “surged 512%” and the insiders held on to their shares, so company insiders still own 31% of the company.
And it trades for “a couple bucks.”
And that’s it, those are our clues… so what’s the stock? Well, if you give me a moment to pull the Thinkolator out of the garage, and get her warmed up, we’ll have some answers — it’s pretty cold up here in Massachusetts, and we’re all feeling a little slow and rusty.
But it shouldn’t take long… and this one tickled a little sensor way back in your friendly neighborhood Gumshoe’s memory banks: the stock was also teased about a dozen years ago by a now-defunct newsletter. Keith Kohl is almost certainly pitching GSI Technology (NASDAQ: GSIT), which was touted by Lou Basenese back in 2011, when it was a $200 million penny stock with a new memory chip design.
It still has that SRAM chip product, and they peaked at close to $100 million in revenue back in 2011, but the business has been slowly shrinking since then. It’s now far below a $200 million valuation, and also well below Kohl’s teased $153 million market cap (it’s down to about $90 million these days).
I am generally very suspicious of these kinds of little niche tech companies that promise great things but haven’t ever done much, so I might be a little too harsh on GSIT… but this is how their performance looks if we go back to their earliest available filings, close to 20 years ago — that purple line is revenue, and those negative numbers at the bottom are free cash flow (orange) and net income (blue):
It’s not shocking to see a very small chip company losing money, that’s often how it goes for startups… but we’re talking about 20 years here.
Now, in their favor, we should note that they have not burned through hundreds of millions of dollars — they’ve mostly been a pretty small operation, they have consistently had revenue coming in the door, and they’ve even had a few profitable years here and there. And they haven’t fallen into the trap we so often see with perpetually small tech firms that can’t transform themselves or grow: they haven’t issued tons of new shares every year just to keep the lights on.
That might be changing in recent years, however. Here’s the 17-year chart of GSIT’s share price and number of shares outstanding: they did a share buyback from 2013 to 2015, despite the fact that revenue was falling pretty precipitously back then, but the initiative to develop new products and chips for the defense and satellite industries and for AI, which began four or five years ago, started to suck up cash again, and they’ve issued shares every quarter for several years. The dilution is not overwhelming at this point, but with the share price falling pretty dramatically in recent years it could become a death spiral (that’s when a company has to issue more shares to keep operating, but the share price is falling because things aren’t going well, so it keeps selling more shares at lower and lower prices; we see it with biotech stocks all the time, but any small and unprofitable company can succumb).
And yes, as you can see from that spike in the purple line, the stock did soar 512% about a year ago — it jumped from $1.45 or so up to over $9 for a brief moment, back in May and June of 2023, when anything with an AI-focused press release surged higher — but that was short-lived, and it didn’t give them any real opportunity to raise money at that higher valuation. The stock popped again earlier this year, but wasn’t able to hold that investor interest.
GSI Technology’s stock price is around $3.50 these days, so it’s off the lows but still far below Kohl’s teased $153 million market cap (the current valuation is about $90 million, though it has also dipped below $50 million at points this year). The ad from Kohl is undated but remains in heavy circulation right now; one would like to charitably assume it was written in the summer of 2023, when the price was higher. They had a cash burn of about $14 million for all of 2023, and they have consistently said that they expect to have enough cash to last “at least a couple of years” … though they buttressed that claim by doing a sale/leaseback deal for their headquarters building last quarter, which provided a one-time gain of $5.7 million (and a phantom quarter of positive earnings)… so they are up to about $20 million in cash now, which at current burn rates might keep the lights on through the end of next year.
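For what it’s worth, that “through the end of next year” guess is just simple runway math using the numbers above (the roughly $20 million of cash after the sale/leaseback and the ~$14 million of cash burn in 2023); obviously the real runway depends on whether the burn rate changes:

```python
# Back-of-the-envelope cash runway using the figures discussed above.
cash_on_hand = 20_000_000   # ~$20 million after the sale/leaseback deal
annual_burn = 14_000_000    # ~$14 million of cash burn in calendar 2023

runway_years = cash_on_hand / annual_burn
print(f"Runway at the 2023 burn rate: ~{runway_years:.1f} years (~{runway_years * 12:.0f} months)")
# ~1.4 years, i.e. into late 2025 if nothing changes
```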
So… will GSIT make it to the big leagues with their latest AI chip design? Here’s what they said about themselves last year:
“Founded in 1995, GSI Technology, Inc. is a leading provider of SRAM semiconductor memory solutions. GSI’s newest products leverage its market-leading SRAM technology. The Company recently launched radiation-hardened memory products for extreme environments and the Gemini® APU, a compute-in-memory associative processing unit designed to deliver performance advantages for diverse AI applications.
“The Gemini APU’s architecture features parallel data processing with two million-bit processors per chip. The massive in-memory processing reduces computation time from minutes to milliseconds, even nanoseconds, while significantly reducing power consumption in a scalable format. Gemini excels at large (billion item) database search applications like facial recognition, drug discovery, Elasticsearch, and object detection. Gemini is ideal for edge applications with a smaller footprint and lower power consumption, where rapid, accurate responses are critical.”
And with SRAM orders way down, this is how they pitch themselves now:
“GSI Technology is at the forefront of the AI revolution with our groundbreaking APU technology, designed for unparalleled efficiency in billion-item database searches and high-performance computing. GSI’s innovations, Gemini-I® and Gemini-II®, offer scalable, low-power, high-capacity computing solutions that redefine edge computing capabilities.
“As a leader in SRAM technology, we leverage our extensive expertise to develop radiation-hardened memory products for space and military use, ensuring exceptional speed, reliability, and performance in extreme environments. GSI Technology is not just advancing technology; we’re shaping a smarter, faster, and more efficient future.”
And they are still moving forward with the Gemini II, which they hope will be a larger volume product, and designing Gemini III, which they want to push into the data center market — though the timeframe does not seem terribly “urgent” to me, this is how they described it a year ago:
“The second milestone was the completion of the Gemini-II tape-out, which we announced last week. As a result, we are on track to have the chip back in our hands early next calendar year and expect to begin sampling the device in the second half of 2024. We are targeting Gemini-II partners and customers in low-power data center expansion and enabling data center functions at the edge. Examples of edge applications would include advanced driver assistance systems and HPC in delivery , autonomous robots, unmanned aerial vehicles, and satellites….
“In addition to advancing the tape-out of Gemini-II, a significant area of recent focus has centered around our ongoing engagements with a key hyperscale partner. I am delighted to report that these discussions are making notable progress. Through our constructive dialogues with this leading provider, we have gleaned invaluable insights into the precise design specifications required for Gemini-III to align with their requirements.
“This collaborative effort has enabled us to chart a roadmap while identifying potential partners who can bring the essential financial and engineering resources to the table for the successful development, manufacturing, and launch of Gemini-III. This evolution will leverage the incorporation of High bandwidth memory into the APU architecture, thereby harnessing the full potential of in-memory compute advantages.”
Will this Gemini chipset be meaningful for the future of AI, particularly Edge AI? I dunno, maybe. But this is still pretty early days, and there are a lot of chip designers working on AI chips for various applications, including some, like GSI’s Gemini, that are relatively low-power. Maybe their integration of memory is critical, I have no idea, but that’s not the kind of “which chip design technology is better” call I’m ever going to be able to make — I’m not a data scientist or a chip designer, so when it comes to these kinds of situations I usually have to wait and let the company’s customers tell me when their product is important. That comes in the form of orders and sales and revenue, and GSI Technology is not there yet.
When might we learn more?
Well, it will probably be pretty gradual — chip design and production is a lot slower and more iterative than most investors might assume, given the excitement over headlines. The tape-out stage is a big deal, at least for the chip designers: that’s when the computer file of the design becomes a physical thing, creating the photomask and enabling the start of actual production (by their contract foundry, in this case)… but it isn’t necessarily a one-step moment when production begins, since they go through fabrication and testing and packaging after that, and at the moment they have produced the first Gemini II chipsets, at least in the form of putting that Gemini II chip on a board and beginning to test it.
The testing will still take some time… first testing the chips internally to see if they work as expected and correcting for any errors in the chip or challenges in the design, and then the next step is probably to convince a couple of key customers to test them in the real world. It wouldn’t be surprising if there are changes to the design before it’s ready for volume production. Maybe that happens in a few months, maybe it takes years, I have no idea. On the call six months ago, they said they will probably have benchmarking data for the chip’s performance “around summertime,” and any revenue from Gemini II would fall into their 2025 fiscal year (which started in April of 2024). That’s not here yet; as of last quarter’s conference call (in July), they were saying that the goal is to demonstrate Gemini II’s capabilities by the end of December, and hopefully both to begin getting orders for Gemini II and to convince some of the hyperscalers to partner with them to help fund the development of the next stage, Gemini III.
So the possible “goes parabolic” moment that Keith Kohl was promoting for Gemini II was probably hoped for this Fall (2024), but delays are the norm in this business, I expect: the actual “tape out” for Gemini II came about a year after it was originally expected, and the chip is clearly not finalized and ready to test with partners yet, so it’s going to take longer. As they said on the latest call, “We have a calculation, but it’s just a calculation line. Right now, we are trying to get G-II working, so we can run the actual device. That’s what we are doing basically right now.”
They are still very hopeful that what they see as a hugely appealing chip with faster processing and lower power consumption will be in high demand for some AI applications, and hope to sell Gemini II chips and servers for Edge AI processing. And the still-in-preliminary-design Gemini III might even be a viable tool for the hottest AI projects, the ChatGPT-like large language models and similarly intensive releases, though that’s probably going to need a lot more capital and a big partner to move forward in the next few years. A lot of that is still very hypothetical, with only the Gemini I actually having finalized silicon in production and available and being sold right now.
There won’t be a big surge of orders or profit when they announce their next quarter, which will probably be in late October — revenue will probably keep drifting down, and Nokia (their biggest single customer) and various defense customers will generate more than half of that revenue, much of which comes from their legacy SRAM chips or from research awards and small government contracts that could turn into early-stage commercial projects in the future. It sounds like much of their order flow continues to come from military and satellite customers, in large part because of the radiation-hardened design of their memory products, and Gemini I is apparently designed to be radiation-tolerant, too, so perhaps that focus will continue to be important. It’s just not driving any actual revenue growth yet.
So that’s what I see, dear friends — a company with a design for AI chips that they’re really excited about, but this is a small-scale chip designer that’s been around for a very long time without ever breaking through to high-volume sales, and it’s still not necessarily ready for prime time yet when it comes to things like generating actual orders or revenue. In most ways, it’s really a start-up that has some lingering, if shrinking, business in selling memory to the military — they transitioned to focusing on AI and their Gemini program in 2019, using technology they bought in 2015, and released Gemini I in 2020, with Gemini II maybe ready for customer testing at the end of 2024, and so far it looks to me like it’s been a slow burn.
We’ll probably see press releases on finalizing the Gemini II chip as the year winds down, but for now it’s a cool-sounding design from a cash-burning company. It may well work out, I have no idea how Gemini II will look to customers a year from now, or whether Gemini III will move forward with some key hyperscale partners and actually get designed someday, let alone built… but either way, it’s likely to require some patience. I’ll let it get much more “real” before I tempt myself, but I know many of you are interested in early-stage projects, and I’m sure some of you understand the chip design and the potential appeal better than I do, so perhaps some of our favorite readers will want to take a closer look (and hopefully let us know what they find).
And I’ll hand it back over to you… think this Gemini II (or III) will be the NVIDIA killer? See other chip designers that you think have a clearer path to AI greatness? Have an opinion on GSI Technology? Let us know with a comment below. Thanks for reading!
P.S. If you’re wondering about Keith Kohl’s pitch for an “A.I. Master Key”, that’s a bit different — that’s his tease about a nuclear fuel company, we covered it most recently a few months ago (and don’t worry, you haven’t really missed anything on that one yet, either).