Summary: Tether launches an AI training framework for smartphones and consumer GPUs

Published: 1 month and 8 days ago
Based on article from CoinTelegraph

Tether, the issuer of the world's largest stablecoin USDT, has unveiled a groundbreaking new AI training framework designed to dramatically lower the barriers to entry for artificial intelligence development. This innovative system aims to democratize access to powerful AI models by enabling their fine-tuning on ubiquitous consumer hardware, from smartphones to a diverse range of GPUs.

Democratizing AI Training with Innovative Framework

Part of its QVAC platform, Tether's new framework leverages Microsoft's BitNet architecture and LoRA (Low-Rank Adaptation) techniques. This combination is engineered to drastically reduce both the memory and computational requirements traditionally associated with training large language models (LLMs). By cutting down these resource demands, the initiative seeks to significantly decrease the cost and hardware dependencies for developers, making advanced AI model development more accessible to a wider audience. This move represents a strategic expansion for Tether, positioning it as a key player in the burgeoning field of on-device AI.
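LoRA's savings come from freezing the base weight matrix and training only two small low-rank factors. The following sketch is illustrative only; the layer size and rank are hypothetical and not taken from Tether's actual configuration.

```python
# Minimal sketch of LoRA's parameter savings (illustrative; the layer
# dimensions and rank r are hypothetical, not Tether's configuration).
# LoRA freezes a base weight matrix W (d_out x d_in) and trains only two
# small factors B (d_out x r) and A (r x d_in); the effective weight is
# W + (alpha / r) * B @ A, so only B and A need gradients and memory.

def full_finetune_params(d_out: int, d_in: int) -> int:
    """Trainable parameters when fine-tuning the full weight matrix."""
    return d_out * d_in

def lora_trainable_params(d_out: int, d_in: int, r: int) -> int:
    """Trainable parameters for one LoRA-adapted layer: B plus A."""
    return d_out * r + r * d_in

# A 4096 x 4096 projection layer, a size typical of mid-sized LLMs:
full = full_finetune_params(4096, 4096)      # 16,777,216 weights
lora = lora_trainable_params(4096, 4096, 8)  # 65,536 weights at rank 8

print(f"full fine-tune: {full:,} trainable params")
print(f"LoRA (r=8):     {lora:,} trainable params "
      f"({100 * lora / full:.2f}% of full)")
```

At rank 8, the adapter trains well under 1% of the layer's weights, which is the kind of reduction that makes fine-tuning feasible on memory-constrained consumer hardware.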

Unprecedented Accessibility and Performance

A core strength of the framework is its extensive cross-platform compatibility: it supports training and inference across a broad spectrum of chips, including AMD, Intel, and Apple Silicon, as well as mobile GPUs from Qualcomm and Apple, crucially extending support beyond Nvidia's typically dominant hardware. Tether's engineers have showcased impressive results, fine-tuning models with up to 1 billion parameters on smartphones in under two hours, and even larger 13-billion-parameter models on mobile devices. Built on the 1-bit BitNet architecture, the framework can reduce VRAM requirements by up to 77.8% compared to similar 16-bit models, allowing sophisticated AI to run on hardware with limited resources. This capability also extends to federated learning and on-device training, promising a future in which AI models can be updated and refined locally without constant reliance on centralized cloud infrastructure.
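A back-of-the-envelope calculation shows why low-bit weights shrink memory needs so sharply. This sketch counts weight storage alone, whereas the 77.8% VRAM figure cited above covers total usage (activations, optimizer state, and so on), so the numbers are not expected to match exactly.

```python
# Illustrative weight-memory estimate for low-bit quantization.
# Counts weight storage only; the article's 77.8% VRAM figure covers
# total memory usage, so these numbers will differ from it.

def weight_memory_gib(n_params: int, bits_per_weight: float) -> float:
    """GiB required to store n_params weights at a given precision."""
    return n_params * bits_per_weight / 8 / 1024**3

params = 1_000_000_000  # the 1B-parameter scale cited for smartphones

fp16 = weight_memory_gib(params, 16)      # standard 16-bit baseline
bitnet = weight_memory_gib(params, 1.58)  # BitNet-style ternary weights
                                          # ({-1, 0, +1} ~= 1.58 bits each)

print(f"16-bit weights: {fp16:.2f} GiB")
print(f"BitNet weights: {bitnet:.2f} GiB "
      f"({100 * (1 - bitnet / fp16):.1f}% smaller)")
```

At this scale, 16-bit weights alone need roughly 1.9 GiB, while ternary BitNet-style weights fit in about 0.2 GiB, comfortably within a smartphone's memory budget.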
