Tachyum™ Inc. announces its membership in the Trusted Computing Group (TCG), part of the company’s continued efforts to collaborate with other industry organizations and academic institutions to further develop and promote trusted computing technologies.
TCG is a not-for-profit organization formed to develop, define and promote open, vendor-neutral, global industry specifications and standards, supportive of a hardware-based root of trust, for interoperable trusted computing platforms. TCG’s core technologies include specifications and standards for the Trusted Platform Module (TPM), Trusted Network Communications (TNC) for network security, and self-encrypting drives. TCG also has work groups that extend core concepts of trust into cloud security, virtualization and other platforms and computing services, from the enterprise to the Internet of Things.
Tachyum is developing the world’s first universal processor, delivering industry-leading performance for data center, AI and HPC workloads. Prodigy is designed to replace the majority of existing chips provisioned in hyperscale data centers by offering a simple programming model built on a coherent multiprocessor environment. Working as part of TCG assures designers and developers of systems, applications and software that they are fully supported when using trusted computing technologies such as Prodigy.
“We continue to believe that the quickest and most reliable way to bring technology innovation to market is by working within a vendor community that promotes a best-practices approach to deployment,” said Dr. Radoslav Danilak, Tachyum founder and CEO. “By joining TCG, we once again commit to working and engaging with fellow members to advance the industry in a positive way. A universal processor like Prodigy offers developers a low-cost, energy-efficient way to advance solutions for the hyperscale data center, HPC environments, AI, private cloud, telecommunications, and the military and intelligence communities.”
Prodigy, the company’s 64-core flagship product, is scheduled for high-rate production in 2021. It outperforms the fastest Xeon processors at 10x lower power (core vs. core) on data center workloads, and outperforms NVIDIA’s fastest GPU on neural-net AI training and inference. Due to its high computational density and I/O bandwidth, networks of Prodigy processors comprising just 125 HPC racks can deliver an ExaFLOPS (a billion billion floating-point operations per second) of capacity. Prodigy’s 3X lower cost per MIPS compared with competing CPUs, coupled with its 10X processor power savings, translates to a 4X reduction in annual data center TCO (Total Cost of Ownership: CAPEX + OPEX). Even at a 50 percent Prodigy attach rate, this translates to billions of dollars per year in real savings for hyperscalers such as Google, Facebook, and Amazon.
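As a rough sanity check on the capacity claim above, the implied per-rack throughput can be derived directly from the figures in the release. This is a back-of-the-envelope sketch using only the numbers quoted (1 ExaFLOPS across 125 racks); the per-rack value is our derivation, not a figure published by Tachyum.

```python
# Back-of-the-envelope arithmetic behind the ExaFLOPS claim.
# Inputs are the figures quoted in the release; the per-rack
# throughput is derived, not an official Tachyum specification.

exaflops = 1e18   # 1 ExaFLOPS = a billion billion (10**18) FLOPS
racks = 125       # number of Prodigy HPC racks cited for ExaFLOPS capacity

flops_per_rack = exaflops / racks
print(f"Implied throughput per rack: {flops_per_rack:.0e} FLOPS")
# i.e. 8e15 FLOPS, or 8 PetaFLOPS per rack
```

At 8 PetaFLOPS per rack, the 125-rack figure is internally consistent with the 1 ExaFLOPS total quoted in the release.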
Since Prodigy can seamlessly and dynamically switch from data center workloads to AI or HPC workloads, unused servers can be powered up on demand as ad hoc AI or HPC networks, CAPEX-free, since the servers themselves have already been purchased. Every Prodigy-provisioned data center, by definition, becomes a low-cost AI center of excellence and a low-cost HPC system.