Matricardi Fabio
FM-1976
22 followers · 108 following
https://medium.com/@fabio.matricardi
ThePoorGpuGuy
fabiomatricardi
AI & ML interests
Control system engineering, AI, and LLMs with Python. ThePoorGPUguy on Substack.
Recent Activity
Liked a model about 3 hours ago: mradermacher/Llama-3.2-1B-4B-Quad-MoE-GGUF
Liked a model about 12 hours ago: ethicalabs/Kurtis-EON1
Reacted to mrs83's post with 🔥 about 12 hours ago:
In 2017, my RNNs were babbling. Today, they are hallucinating beautifully. 10 years ago, getting an LSTM to output coherent English was a struggle. 10 years later, after a "cure" based on FineWeb-EDU and a custom synthetic mix for causal conversation, the results are fascinating.

We trained this on ~10B tokens on a single AMD GPU (ROCm). It is not a Transformer: Echo-DSRN (400M) is a novel recurrent architecture inspired by Hymba, RWKV, and xLSTM, designed to challenge the "Attention is All You Need" monopoly on the Edge. The ambitious goal is to build a small instruct model with RAG and tool usage capabilities (https://huggingface.co/ethicalabs/Kurtis-EON1).

The Benchmarks (Size: 400M). For a model this size (trained on <10B tokens), the specialized performance is surprising:
*SciQ*: 73.8% (this rivals billion-parameter models in pure fact retrieval).
*PIQA*: 62.3% (solid physical intuition for a sub-1B model).

The Reality Check: HellaSwag (29.3%) and Winogrande (50.2%) show the limits of 400M parameters and 10B tokens of training. We are hitting the "Reasoning Wall", which confirms we need to scale to (hopefully) unlock deeper common sense.

As you can see in the visualization (to be released soon on HF), the FineWeb-EDU bias is strong. The model is convinced it is in a classroom ("In this course, we explore...").

The Instruct Model is not ready yet, and we are currently using curriculum learning to test model plasticity. Source code and weights will not be released yet. This is not a fork or a fine-tune: the base model is built in-house at https://www.ethicalabs.ai/, with novel components that do not exist in current open libraries.

Call for Collaboration: I am looking for Peer Reviewers interested in recurrent/hybrid architectures. If you want to explore what lies beyond Transformers, let's connect!

Training diary: https://huggingface.co/ethicalabs/Kurtis-EON1
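A minimal sketch (not from the post) of how zero-shot scores on SciQ, PIQA, HellaSwag, and Winogrande are typically produced, using EleutherAI's lm-evaluation-harness. Since the Kurtis-EON1 weights are not public yet, the model id below is an assumption, and a custom recurrent architecture would presumably need trust_remote_code=True to load through transformers:

# Sketch: zero-shot benchmark run with EleutherAI's lm-evaluation-harness
# (pip install lm-eval). Assumption: the checkpoint is eventually published
# under this repo id; it is NOT available at the time of the post.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=ethicalabs/Kurtis-EON1,trust_remote_code=True",
    tasks=["sciq", "piqa", "hellaswag", "winogrande"],
    num_fewshot=0,
)
# Print per-task accuracy (lm-eval 0.4.x reports it under "acc,none").
for task, metrics in results["results"].items():
    print(task, metrics.get("acc,none"))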
Organizations
None yet
FM-1976's Spaces (9)
Sort: Recently updated
Gemma3-1b-it GradioCHAT (Runtime error): Gradio Chatbot with Gemma 3 1B Instruct
TweetGeneration (Sleeping, 1 like): Gradio and HF free tools - from articles to tweets
Gemma2 2B Reflection (Build error)
OuteWorlderAI LiteMistral150M (Sleeping)
Gemma2 2B Instruct ST (Build error)
StableLM-Zepyhr-3B Playground (Runtime error, 5 likes)
Starling7B PlayGround (Runtime error, 2 likes)
MyFirstMiniChat (Sleeping, 1 like)
MyFAVModels (No application file)