A new paradigm
Complete redesign of the stack from the transistor all the way to AI applications
Up to 1000 times faster
Up to 1000 times less energy consumption
EvoChip is a technology designed to take full advantage of the hardware environment's features. The algorithms are so efficient that even in software they offer enormous performance advantages. The IP can function in hardware-only form, software form, or hardware-accelerated software form.
Get started
Ultra-fast,
hyper-efficient,
modular,
cost-effective
AI learning engine
Our mission is for our validated technology to render neural-network usage and the GPU AI stack obsolete overnight.
No chip development required
We can use off-the-shelf FPGA hardware from mature suppliers.
Fastest learning process
Over one billion model evaluations per second on a low-end FPGA.
Efficient modeling logic
We use fundamental binary logic, escaping the burden of translating standard math.
No human-in-the-loop optimization
Evolutionary engine learns without human bias, and can be bounded for safety.
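To make the idea concrete, here is a toy sketch of an evolutionary engine learning a logic-gate circuit directly, with no arithmetic and no human tuning. The genome layout, NAND-only fabric, fitness measure, and mutation scheme below are illustrative assumptions, not our production algorithm; note how packing the whole truth table into one integer lets a single bitwise operation evaluate every input row at once.

```python
# Illustrative toy only: the genome, fitness, and mutation scheme are
# assumptions for demonstration, not EvoChip's actual algorithm.
import random

random.seed(0)

# All four rows of a 2-input truth table, packed into one integer per
# signal, so one bitwise op evaluates every row at once.
A, B = 0b1100, 0b1010        # input columns a, b
TARGET = 0b0110              # target function: a XOR b
MASK = 0b1111

def evaluate(genome):
    """Run a feed-forward NAND network; genome[k] wires gate k to two prior signals."""
    signals = [A, B]
    for i, j in genome:
        signals.append(~(signals[i] & signals[j]) & MASK)
    return signals[-1]       # the last gate is the circuit output

def fitness(genome):
    """Count truth-table rows the circuit gets right (0..4)."""
    return 4 - bin(evaluate(genome) ^ TARGET).count("1")

def random_genome(n_gates=6):
    return [(random.randrange(2 + k), random.randrange(2 + k)) for k in range(n_gates)]

def mutate(genome):
    child = list(genome)
    k = random.randrange(len(child))
    child[k] = (random.randrange(2 + k), random.randrange(2 + k))
    return child

# Hill climbing with neutral drift: accept any mutant that is no worse.
best = random_genome()
for _ in range(20000):
    cand = mutate(best)
    if fitness(cand) >= fitness(best):
        best = cand
    if fitness(best) == 4:
        break

print("best fitness:", fitness(best), "of 4")
```

Because each candidate circuit is scored with a handful of bitwise operations, the same search maps naturally onto parallel hardware evaluation.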
Our technology
Complete redesign of the stack from the transistor all the way to AI applications.
> Modeling logic with ultra-low gate counts.
> Digital logic fabric, massively faster in silicon versus neural networks.
> Minimal power consumption and heat production. No FPU dependency.
> No matrix multiplications.
> Can be massively parallelized in hardware.
> Ultralight algorithm and modular architecture enabling easy-to-create chip configurations suited to a huge variety of use cases.
> Same VHDL logic can be migrated from FPGA to semi-custom or fully-custom ASIC design.
What differentiates us?
> We’ve abandoned the arithmetic world. We focus on logic gates for directly producing AI workload outputs.
> Our technology is in direct contrast to the current practice of using logic gates to perform arithmetic (ALU/FPU/GPU) and in turn using that constrained and costly system to produce AI workload outputs.
> This results in lower resource requirements (power, heat, die space, gate/transistor count, etc.) and a drastic increase in processing speed (low latency, high throughput).
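The contrast can be sketched in a few lines of Python (an illustrative toy, not our production logic): when the model is a Boolean expression, bitwise operators evaluate one sample per bit lane, so a single pass through the gate network processes 64 samples with no floating point and no matrix multiplication. The majority-of-three network below is an arbitrary stand-in model.

```python
# Illustrative sketch: a gate network evaluated bit-sliced, one sample
# per bit lane, with no floats and no matmul.
import random

random.seed(1)
N = 64                                   # one bit lane per sample

def majority3(a, b, c):
    # Three ANDs and two ORs; works on single bits or on packed words.
    return (a & b) | (a & c) | (b & c)

# Per-sample inputs...
xs = [random.getrandbits(1) for _ in range(N)]
ys = [random.getrandbits(1) for _ in range(N)]
zs = [random.getrandbits(1) for _ in range(N)]

# ...packed into one machine word per input signal.
pack = lambda bits: sum(b << i for i, b in enumerate(bits))
X, Y, Z = pack(xs), pack(ys), pack(zs)

# One pass through the gate network evaluates all 64 samples at once.
packed_out = majority3(X, Y, Z)

# Cross-check against the obvious one-sample-at-a-time loop.
loop_out = [majority3(a, b, c) for a, b, c in zip(xs, ys, zs)]
assert packed_out == pack(loop_out)
print("64 samples evaluated in one gate-network pass")
```

In silicon the same principle scales past word width: every gate in the fabric switches in parallel, which is where the latency and throughput gains come from.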
TESTIMONIAL
"Based on current test results, we reduce biomarker selection from several days to a few minutes, enabling us to complete biomarker discovery for up to 10 times as many diseases per year."
Patrick Lilley, CEO, Liquid Biosciences
Learn more about our technology
Contact Us Now