Kazu Gomi has a big view of the technology world from his perch in Silicon Valley. As president and CEO of NTT Research, a division of the giant Japanese telecommunications firm NTT, Gomi controls the R&D budget for a sizable chunk of the fundamental research done in Silicon Valley.
And perhaps it's no surprise that Gomi is pouring serious money into AI for the enterprise, hunting for new opportunities to take advantage of the AI explosion. Last week, Gomi unveiled a new research effort focused on the physics of AI, as well as a design for an AI inference chip that can process 4K video faster. This comes on the heels of research projects announced last year that could pave the way for better AI and more energy-efficient data centers.
I spoke with Gomi about this effort in the context of what other big companies like Nvidia are doing. Physical AI has become a big deal in 2025, with Nvidia leading the charge to create synthetic data to pretest self-driving cars and humanoid robots so they can get to market faster.
And building on a story I first covered in my first tech reporting job, Gomi said the company is doing research on photonic computing as a way to make AI computing far more energy efficient.

Decades ago, I toured Bell Labs and listened to the ambitions of Alan Huang as he sought to make an optical computer. Gomi's team is attempting something similar decades later. If they can pull it off, it could let data centers run on far less power, as light doesn't collide with other particles or generate friction the way electrical signals do.
During the event last week, I enjoyed talking to a little desk robot called Jibo that swiveled and "danced" and read off my vital signs, like my heart rate, blood oxygen level, blood pressure, and even my cholesterol, all by scanning my skin to detect the tiny palpitations and color changes as blood moved through my cheeks. It also held a conversation with me via its AI chat capability.
NTT has more than 330,000 employees and $97 billion in annual revenue. NTT Research is part of NTT, a global technology and business solutions provider with an annual R&D budget of $3.6 billion. About six years ago it created an R&D division in Silicon Valley.
Here's an edited transcript of our interview.

VentureBeat: Do you feel like there's a theme, a prevailing theme this year for what you're talking about compared to last year?
Kazu Gomi: There's no secret. We're more AI-heavy. AI is front and center. We talked about AI last year as well, but it's more vivid today.
VentureBeat: I wanted to hear your opinion on what I took away from CES, when Jensen Huang gave his keynote speech. He talked a lot about synthetic data and how it was going to accelerate physical AI. Because you can test your self-driving cars with synthetic data, or test humanoid robots, so much more testing can be done reliably in the virtual domain. They get to market much faster. Do you feel this makes sense, that synthetic data can lead to this acceleration?
Gomi: For the robots, yes, 100%. The robots and all the physical things, it makes a ton of sense. AI is influencing so many other things as well. Probably not everything. Synthetic data can't change everything. But AI is impacting the way businesses run themselves. The legal department might be replaced by AI. The HR department is replaced by AI. Those kinds of things. In those scenarios, I'm not sure how synthetic data makes a difference. It's not making as big an impact as it could for things like self-driving cars.
VentureBeat: It made me think that things are going to come so fast, things like humanoid robots and self-driving cars, that we have to figure out whether we really want them, and what we want them for.
Gomi: That's a big question. How do you deal with them? We've definitely started talking about it. How do you work with them?

VentureBeat: How do you use them to augment human workers, but also–I think one of your people talked about raising the standard of living [for humans, not for robots].
Gomi: Right. If you do it right, absolutely. There are lots of good ways to work with them. There are really bad scenarios that are possible as well.
VentureBeat: If we saw this much acceleration in the last year or so, and we can expect synthetic data to accelerate it even more, what do you expect to happen two years from now?
Gomi: Not so much on the synthetic data per se, but today, one of the press releases my team put out is about our new research group, called Physics of AI. I'm looking forward to the results coming from this team, in so many different ways. One of the interesting ones is that–this humanoid thing comes close to it. But right now we don't know–we treat AI as a black box. We don't know exactly what's happening inside the box. That's a problem. This team is looking inside the black box.
There are lots of potential benefits, but one of the intuitive ones is that if AI starts saying something wrong, something biased, obviously you need to make corrections. Right now we don't have a very good, effective way to correct it, except to just keep saying, "This is wrong, you should say this instead of that." There's research saying that data alone won't save us.
VentureBeat: Does it feel like you're trying to teach a child something?
Gomi: Yeah, exactly. The interesting ideal scenario–with this Physics of AI, effectively what we can do, there's a mapping of knowledge. In the end AI is a computer program. It's made up of neural connections, billions of neurons connected together. If there's bias, it's coming from a particular connection between neurons. If we can find that, we can ultimately reduce bias by cutting those connections. That's the best-case scenario. We all know things aren't that easy. But the team may be able to tell you that if you cut these neurons, you might be able to reduce bias 80% of the time, or 60%. I hope this team can get to something like that. Even 10% is still good.
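The connection-cutting Gomi describes is, in spirit, weight pruning. Here is a minimal sketch of the idea, assuming a toy NumPy weight matrix and hypothetical flagged connections; it is an illustration, not NTT's actual method:

```python
import numpy as np

# Hypothetical sketch: if a bias can be traced to specific
# neuron-to-neuron connections, "cutting" them means zeroing
# those entries in the layer's weight matrix.
rng = np.random.default_rng(0)
weights = rng.normal(size=(4, 4))  # toy layer: 4 inputs -> 4 neurons

def cut_connections(w, connections):
    """Return a copy of w with the given (row, col) connections severed."""
    pruned = w.copy()
    for row, col in connections:
        pruned[row, col] = 0.0
    return pruned

# Suppose analysis flagged connections (0, 1) and (2, 3) as bias sources.
pruned = cut_connections(weights, [(0, 1), (2, 3)])
print(pruned[0, 1], pruned[2, 3])  # prints: 0.0 0.0
```

The hard part, of course, is the "if we can find that" step: mapping a behavior back to specific connections, which is exactly what the Physics of AI group is after.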
VentureBeat: There was the AI inference chip. Are you trying to outdo Nvidia? It seems like that would be very hard to do.

Gomi: With that particular project, no, that's not what we're doing. And yes, it's very hard to do. Comparing that chip to Nvidia's is apples and oranges. Nvidia's GPU is more of a general-purpose AI chip. It can power chatbots or autonomous cars. You can do all kinds of AI with it. The one we released yesterday is only good for video and images, object detection and so on. You're not going to create a chatbot with it.
VentureBeat: Did it seem like there was an opportunity to go after? Was something not really working in that area?
Gomi: The short answer is yes. Again, this chip is definitely customized for video and image processing. The key is that we can do inference without reducing the resolution of the base image. High-resolution 4K images, you can use them for inference. The benefit is that–take the case of a surveillance camera. Maybe it's 500 meters away from the object you want to look at. With 4K video you can see that object quite well. But with conventional technology, because of processing power, you have to reduce the resolution. Maybe you could tell it was a bottle, but you couldn't read anything on it. Maybe you could zoom in, but then you lose other information from the area around it. You can do more with that surveillance camera using this technology. Higher resolution is the benefit.

VentureBeat: This might be unrelated, but I was thinking of Nvidia's graphics chips, where they were using DLSS, using AI to predict the next pixel you need to draw. That prediction works so well that it got eight times faster in this generation. The overall performance is now something like–out of 30 frames, AI can accurately predict 29 of them. Are you doing something similar here?
Gomi: Something related to that–the reason we're working on this, we had a project that was the precursor to this technology. We spent a lot of energy and resources in the past on video codec technologies. We sold an early MPEG decoder for professionals, for TV station-grade cameras and things like that. We had that base technology. Within this base technology, something similar to what you're talking about–there's a bit of object recognition going on in current MPEG. Between the frames, it predicts that an object is moving from one frame to the next by so much. That's part of the codec technology. Object recognition makes those predictions happen. That algorithm, to some extent, is used in this inference chip.
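The inter-frame prediction Gomi refers to can be sketched as block matching: search for the pixel offset that best explains where a block moved between two frames. A toy illustration under that assumption (not the actual MPEG or NTT implementation):

```python
import numpy as np

# Toy block-matching motion estimation, the kind of inter-frame
# prediction MPEG-style codecs rely on: find the (dy, dx) shift
# that minimizes the difference between a block in one frame and
# candidate blocks in the next.
def estimate_motion(prev_frame, next_frame, block, search=2):
    """Find the (dy, dx) shift of `block` = (y, x, size) between frames."""
    y, x, s = block
    patch = prev_frame[y:y+s, x:x+s]
    best, best_err = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = next_frame[y+dy:y+dy+s, x+dx:x+dx+s]
            err = np.abs(cand - patch).sum()  # sum of absolute differences
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

prev = np.zeros((8, 8)); prev[2:4, 2:4] = 1.0  # bright 2x2 block
nxt = np.zeros((8, 8)); nxt[3:5, 3:5] = 1.0    # same block shifted by (1, 1)
print(estimate_motion(prev, nxt, (2, 2, 2)))   # prints: (1, 1)
```

Real codecs do this per macroblock across the whole frame, which is why a chip with that machinery already has a head start on object tracking.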
VentureBeat: Something else Jensen was saying that was interesting–we had an architecture for computing, retrieval-based computing, where you go into a database, fetch an answer, and come back. Whereas with AI we now have the opportunity for reason-based computing. AI figures out the answer without having to look through all this data. It can say, "I know what the answer is," instead of retrieving the answer. It could be a different kind of computing than what we're used to. Do you think that will be a big change?
Gomi: I think so. A lot of AI research is going on. What you said is possible because AI has "knowledge." Because you have that knowledge, you don't have to go retrieve data.

VentureBeat: Because I know something, I don't have to go to the library and look it up in a book.
Gomi: Exactly. I know that such and such an event happened in 1868, because I memorized it. You could look it up in a book or a database, but if you already know it, you have that knowledge. It's an interesting part of AI. As it becomes more intelligent and acquires more knowledge, it doesn't have to go back to the database each time.
VentureBeat: Do you have any particular favorite projects going on right now?
Gomi: A couple. One thing I want to highlight, perhaps, if I could pick one–you're looking closely at Nvidia and those players. We're putting a lot of focus on photonics technology. We're thinking about photonics in a couple of different ways. When you look at AI infrastructure–you know all the stories. We've created so many GPU clusters. They're all interconnected. The platform is huge. It requires so much energy. We're running out of electricity. We're overheating the planet. This isn't good.
We want to address this issue with some different tricks. One of them is using photonics technology. There are a couple of different ways. First off, where is the bottleneck in the current AI platform? During the panel today, one of the panelists talked about this. When you look at GPUs, on average, a GPU is idle 50% of the time. There's so much data transport going on between processors and memory. The memory and that communication line are a bottleneck. The GPU is waiting for data to be fetched and waiting to write results to memory. This happens so many times.
One idea is using optics to make those communication lines much faster. That's one thing. By using optics, making it faster is one benefit. Another benefit is that at faster clock speeds, optics is much more energy-efficient. Third, and this involves a lot of engineering detail, with optics you can go farther. You can go this far, or even a couple of feet away. Rack configuration can be much more flexible and less dense. The cooling requirements are eased.
VentureBeat: Right now you're more like data center to data center. Here, are we talking about processor to memory?

Gomi: Yeah, exactly. This is the evolution. Right now it's between data centers. The next phase is between the racks, between the servers. After that is within the server, between the boards. Then within the board, between the chips. Eventually within the chip, between a couple of different processing units in the core, the memory cache. That's the evolution. Nvidia has also released some packaging that's along the lines of this phased approach.
VentureBeat: I started covering technology around 1988, out in Dallas. I went to visit Bell Labs. At the time they were doing photonic computing research. They made a lot of progress, but it's still not quite here, even now. It's spanned my whole career covering technology. What's the challenge, or the problem?
Gomi: The scenario I just talked about hasn't touched the processing unit itself, or the memory itself. Only the connection between the two parts, making that faster. Obviously the next step is we have to do something with the processing unit and the memory itself.
VentureBeat: More like an optical computer?
Gomi: Yes, a real optical computer. We're trying to do that. The thing is–it sounds like you've followed this subject for a while. But here's a bit of the evolution, so to speak. Back in the day, when Bell Labs or whoever tried to create an optical-based computer, it was basically replicating the silicon-based computer one to one, exactly. All the logic circuits and everything would run on optics. That's hard, and it continues to be hard. I don't think we can get there. Silicon photonics won't address the issue either.
The interesting piece is, again, AI. For AI you don't need very fancy computations. AI computation, at its core, is relatively simple. Everything is a thing called matrix-vector multiplication. Information comes in, there's a result, and it comes out. That's all you do. But you have to do it a billion times. That's why it gets complicated and requires so much energy and so on. Now, the beauty of photonics is that it can do this matrix-vector multiplication by its nature.
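The operation Gomi singles out is easy to show concretely: each neural-network layer boils down to y = W @ x, a matrix-vector multiplication repeated billions of times during inference. A minimal NumPy sketch:

```python
import numpy as np

# The core AI operation Gomi describes: one layer of a neural
# network is y = W @ x, a matrix-vector multiplication. A photonic
# chip performs this same multiply in the analog optical domain.
W = np.array([[1.0, 2.0],
              [3.0, 4.0]])  # weights of a toy 2-in, 2-out layer
x = np.array([1.0, 0.5])    # input activations

y = W @ x                   # one layer's worth of compute
print(y)                    # prints: [2. 5.]
```

Because the operation is so uniform, hardware that does only this one thing well, whether a GPU tensor core or an optical interferometer, can cover most of AI's compute.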
VentureBeat: Does it involve a lot of mirrors and redirection?

Gomi: Yeah, mirroring and then interference and all that stuff. To make it happen more efficiently and everything–in my researchers' opinion, silicon photonics may be able to do it, but it's hard. You have to involve different materials. That's something we're working on. I don't know if you've heard of this, but it's lithium niobate. We use lithium niobate instead of silicon. There's a technology to make it into a thin film. You can do these computations and multiplications on the chip. It doesn't require any electronic components. It's pretty much all done in analog. It's super fast, super energy-efficient. To some extent it mimics what's happening inside the human brain.
These hardware researchers, their goal–a human brain runs on maybe around 20 watts. ChatGPT requires 30 or 40 megawatts. We can use photonics technology to dramatically upend the current AI infrastructure, if we can get all the way to an optical computer.
VentureBeat: How are you doing with the digital twin of the human heart?
Gomi: We've made quite good progress over the last year. We created a system called the autonomous closed-loop intervention system, ACIS. Assume you have a patient with heart failure. With this system applied–it's like autonomous driving. Theoretically, without human intervention, you can prescribe the right medication and treatment for this heart and bring it back to a normal state. It sounds a bit fanciful, but there's a bio-digital twin behind it. The bio-digital twin can precisely predict the state of the heart and what an injection of a given drug might do to it. It can quickly predict cause and effect, decide on a treatment, and move forward. Simulation-wise, the system works. We have some good evidence that it will work.

VentureBeat: Jibo, the robot in the health booth, how close is that to being accurate? I think it got my cholesterol wrong, but it got everything else right. Cholesterol seems to be a hard one. They were saying that was a new part of what they were doing, while everything else was more established. If you can get that to high accuracy, it could be transformative for how often people have to see a doctor.
Gomi: I don't know too much about that particular subject. The conventional way of testing that, of course, is to draw blood and analyze it. I'm sure someone is working on it. It's a matter of what kind of sensor you can create. With non-invasive devices we can already read things like glucose levels. That's interesting technology. If someone did it for something like cholesterol, we could bring it into Jibo and go from there.