All HF Hub posts

prithivMLmods posted an update about 20 hours ago
An LTX-2 Camera-Control LoRA demo with dolly-in/out and dolly-left/right motions is now available on Hugging Face, paired with ltx-2-19b-distilled-lora for fast inference. It also includes dynamic GPU duration adjustment for long video generations. Click the related Space links below.

🤗 Try it now: prithivMLmods/LTX-2-LoRAs-Camera-Control-Dolly
⭐ GitHub: https://github.com/PRITHIVSAKTHIUR/LTX-2-LoRAs-Camera-Control-Dolly
🕹️ Collection: https://huggingface.co/collections/prithivMLmods/image-generation-apps-collection
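
For context, here is a minimal sketch of how a camera-control LoRA like this could be loaded with a diffusers-style pipeline. The pipeline class, repo ids, and prompt phrasing are illustrative assumptions, not the exact code behind the Space:

```python
# Illustrative sketch only: the exact pipeline class, repo ids, and LoRA
# weight names for LTX-2 may differ from what is shown here.
import torch
from diffusers import DiffusionPipeline

# Hypothetical repo id for the distilled base model.
pipe = DiffusionPipeline.from_pretrained(
    "Lightricks/LTX-2", torch_dtype=torch.bfloat16
).to("cuda")

# Attach the camera-control LoRA (repo id and adapter name are assumptions).
pipe.load_lora_weights(
    "prithivMLmods/LTX-2-LoRAs-Camera-Control-Dolly",
    adapter_name="dolly",
)

# Camera motion is steered through the prompt, e.g. a dolly-in move.
video = pipe(
    prompt="dolly-in shot of a lighthouse at dusk, cinematic",
    num_frames=121,
    num_inference_steps=8,  # distilled LoRAs target few-step inference
).frames[0]
```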

To learn more, visit the app page or the respective model pages.
Ujjwal-Tyagi posted an update 3 days ago
For a more detailed analysis, you can read the article here: https://huggingface.co/blog/Ujjwal-Tyagi/steering-not-censoring

We are sleepwalking into a crisis. I am deeply concerned about AI model safety right now because, as the community rushes to roll out increasingly powerful open-source models, we are completely neglecting the most critical aspect: safety. It seems that nobody is seriously thinking about the potential consequences of unregulated model outputs or the necessity of robust guardrails. We are essentially planting the seeds of our own destruction if we prioritize raw performance over security.

This negligence is terrifyingly evident when you look at the current landscape. Take Qwen Image 2512, for example; while it delivers undeniably strong performance, it has incredibly weak guardrails that make it dangerous to deploy. In stark contrast, Z Image might not get as much hype for its power, but it has much better safety guardrails than Qwen Image 2512.

It is imperative that the open-source community and developers recognize that capability without responsibility is a liability. We must actively work on protecting these models from bad actors who seek to exploit them for malicious purposes, such as generating disinformation, creating non-consensual imagery, or automating cyberattacks. It is no longer enough to simply release a powerful model; we must build layers of defense that make it resistant to jailbreaking and adversarial attacks. Developers need to prioritize alignment and robust filtering techniques just as much as they prioritize benchmark scores. We cannot hand such potent tools to the world without ensuring they have the safety mechanisms to prevent them from being turned against us.
branikita posted an update 1 day ago
Our engineer Alan from the https://robonine.com team has assembled the mechanical frame of our 6-DoF manipulator prototype, without servo motors for now. At this stage we are evaluating how easy the structure is to assemble, checking for mechanical play, and validating the kinematics.

Good news: the structure feels solid and Alan reports no detectable backlash so far.
DawnC posted an update 2 days ago
VividFlow: AI Image Enhancement & Video Generation 🎬🎨

Bring your images to life with cinematic motion AND create stunning AI backgrounds! VividFlow combines professional-grade video generation with intelligent background replacement in one streamlined platform.

🎭 Dual Creative Powers
Transform any static image into a high-quality dynamic video with smooth, natural motion, at durations ranging from 0.5 to 5 seconds. Choose from curated motion templates across 8 categories designed for portraits, products, landscapes, and artistic content. Create photorealistic backgrounds by selecting from 24 professionally crafted scene presets spanning studios, natural environments, urban settings, artistic atmospheres, and more.

⚡ Optimized Performance
Video generation currently completes in 4-5 minutes with active optimization underway to dramatically reduce processing time. Background replacement finishes in 30-40 seconds after initial loading. The independent dual-tab design ensures smooth workflow without performance conflicts.

🎯 Complete Creative Control
Achieve perfectly consistent results with seed-based reproducibility and adjustable duration for video generation. Background creation offers flexible composition modes, precision edge softening for challenging subjects, and instant mask preview for quality verification.

📈 Continuous Innovation
Ongoing optimization targets significantly faster video generation through advanced model preparation. Future enhancements include expanded template libraries, batch processing capabilities, and industry-specific presets shaped by community feedback.

👉 Try it now: DawnC/VividFlow

Support development with a ❤️ — your engagement shapes future priorities!
#AI #ImageToVideo #BackgroundGeneration #VideoGeneration
mindchain posted an update about 19 hours ago
Claude Code Self & Continual Learning

Hey everyone! 👋

30 GitHub Stars in 4 Days - Thank You!

I'm really grateful for the positive response to the Claude Reflect System. In just 4 days, 30 developers have shown interest by starring the project. Thank you so much!

What Is Claude Reflect?

Correct once, never again. Claude Reflect helps Claude Code remember your corrections and preferences across sessions. Instead of repeating the same feedback, the system learns and applies it automatically.

Main Features:

🧠 Learning System
- Detects corrections and preferences from conversations
- Stores them permanently in skill files
- Applies learnings in future sessions

🔒 Safety First
- Automatic backups before changes
- YAML validation
- Git version control

⚡ Two Modes
- Manual: Run /reflect when you want
- Auto: Reflects automatically at session end

How It Works

If you correct Claude to use pytest instead of unittest, this preference gets saved. Next time, Claude will remember and use pytest automatically. It's that simple.
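For illustration, a learned preference might end up stored as something like the following skill-file entry. This is a hypothetical sketch of the format, not the actual claude-reflect schema; check the repo for the real layout:

```yaml
# Hypothetical skill-file entry; the actual claude-reflect schema may differ.
preferences:
  - category: testing
    correction: "Use pytest instead of unittest"
    applies_to: "Python test files"
    source: user_feedback
```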

Getting Started

1. Clone the repository
2. Install dependencies
3. Activate the skill
4. Try it out!

The python-project-creator example shows how the system learns from your feedback.

Give It a Try

https://github.com/haddock-development/claude-reflect-system

Feel free to check it out, give feedback, or contribute. Every bit of input helps improve the project!

Thank you so much for your support!

---
#ClaudeCode #AI #MachineLearning #ContinualLearning #OpenSource #Developer #Coding #Python #Productivity #DevTools #GitHub #SoftwareDevelopment #Programming #AIAssistant #DeveloperTools #CodeQuality #Tech
davidmezzetti posted an update 1 day ago
🥃 Distilling Tiny Embeddings. We're happy to build on the BERT Hash series of models with this new set of fixed-dimensional tiny embedding models.

Ranging from 244K to 970K parameters and from 50 to 128 dimensions, these tiny models pack quite a punch.

Use cases include on-device semantic search, similarity comparisons, LLM chunking, and Retrieval Augmented Generation (RAG). The advantage is that data never needs to leave the device while still delivering solid performance.
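
As a sketch of the on-device use case, here is roughly what semantic search with one of these models could look like via txtai. The model id below is a placeholder assumption; see the blog post for the actual repos:

```python
# Minimal on-device semantic search sketch with txtai.
from txtai import Embeddings

# Load a tiny fixed-dimensional embeddings model (placeholder repo id).
embeddings = Embeddings(path="NeuML/bert-hash-nano-embeddings")

data = [
    "US tops 5 million confirmed virus cases",
    "Canada's last fully intact ice shelf has suddenly collapsed",
    "Beijing mobilises invasion craft along coast",
]

# Indexing runs entirely locally; no data leaves the device.
embeddings.index(data)

# Return the best match for a natural-language query.
print(embeddings.search("climate change", 1))
```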

https://huggingface.co/blog/NeuML/bert-hash-embeddings
MonsterMMORPG posted an update about 10 hours ago
NVFP4 With CUDA 13 Full Tutorial, 100%+ Speed Gain + Quality Comparison & New Cheap Cloud SimplePod

Full tutorial: https://www.youtube.com/watch?v=yOj9PYq3XYM

NVFP4 models have finally arrived in ComfyUI, and therefore SwarmUI, with CUDA 13. NVFP4 models are 100%+ faster with minimal impact on quality. I have done a grid quality comparison of the NVFP4 versions of FLUX 2, Z Image Turbo, and FLUX 1 to show you the difference. To make CUDA 13 work, I have compiled Flash Attention, Sage Attention, and xFormers for both Windows and Linux with all CUDA archs, supporting virtually all GPUs from the GTX 1650 series through the RTX 2000, 3000, 4000, and 5000 series and beyond.

In this full tutorial, I show you how to upgrade ComfyUI, and thus SwarmUI, to the latest CUDA 13 with the latest libraries and Torch 2.9.1. Moreover, our compiled libraries such as Sage Attention work with all models on all GPUs without generating black images or videos, including models such as Qwen Image and Wan 2.2. Hopefully LTX 2 presets and a tutorial are coming soon too. Finally, I introduce a new private cloud GPU platform called SimplePod, similar to RunPod. It has all the same features as RunPod but is much faster and cheaper.

📂 Resources & Links:
ComfyUI Installers: [ https://www.patreon.com/posts/ComfyUI-Installers-105023709 ]

SimplePod: [ https://simplepod.ai/ref?user=secourses ]

SwarmUI Installer, Model Auto Downloader and Presets: [ https://www.patreon.com/posts/SwarmUI-Install-Download-Models-Presets-114517862 ]

How to Use SwarmUI Presets & Workflows in ComfyUI + Custom Model Paths Setup for ComfyUI & SwarmUI Tutorial: [ https://youtu.be/EqFilBM3i7s ]

SECourses Discord Channel for 24/7 Support: [ https://discord.com/invite/software-engineering-courses-secourses-772774097734074388 ]

NVIDIA NVFP4 Blog Post More: [ https://developer.nvidia.com/blog/introducing-nvfp4-for-efficient-and-accurate-low-precision-inference/ ]
unmodeled-tyler posted an update about 16 hours ago
NEW MODEL: vanta-research/mox-8b

Hey everyone! I changed up my approach with this one a bit. Mox was designed with the following characteristics:

- self coherence
- direct opinions
- epistemic confidence
- grounded meta-awareness
- reasoned refusals

I've been thinking a lot about what "helpfulness" means lately. Commonly in AI, that looks like fulfilling user requests as closely as possible as long as the request isn't unsafe.

But I wanted to know what it was like to build a model that might be helpful in the same way a human would be.

For example, if you ask Mox to write a 10-page paper on the cultural significance of staplers, Mox will probably refuse, tell you that wouldn't be useful or helpful to ANYBODY, and recommend a different, more useful approach.

Mox is still very much a work in progress, but I think that this is a good starting point! I'm already generating more datasets to add more elements to Mox's persona in future versions, which you should see on the hub soon!
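
If you want to poke at it, here is a minimal sketch for trying the model with transformers, assuming a standard chat-template causal LM; the sampling settings are just placeholders:

```python
# Minimal sketch for trying mox-8b with transformers; assumes a standard
# chat-template causal LM. Generation settings are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "vanta-research/mox-8b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{
    "role": "user",
    "content": "Write a 10 page paper on the cultural significance of staplers.",
}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Expect a reasoned refusal plus a suggestion for a more useful approach.
outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```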

kanaria007 posted an update about 23 hours ago
✅ New Article: *Designing, Safeguarding, and Evaluating Learning Companions* (v0.1)

Title:
🛡️ Designing, Safeguarding, and Evaluating SI-Core Learning Companions
🔗 https://huggingface.co/blog/kanaria007/designing-safeguarding-and-evaluating

---

Summary:
Most “AI tutoring” talks about prompts, content, and engagement graphs.
But real learning companions—especially for children / ND learners—fail in quieter ways: *the system “works” while stress rises, agency drops, or fairness erodes.*

This article is a practical playbook for building SI-Core–wrapped learning companions that are *goal-aware (GCS surfaces), safety-bounded (ETH guardrails), and honestly evaluated (PoC → real-world studies)*—without collapsing everything into a single score.

> Mastery is important, but not the only axis.
> *Wellbeing, autonomy, and fairness must be first-class.*

---

Why It Matters:
• Replaces “one number” optimization with *goal surfaces* (and explicit anti-goals)
• Treats *child/ND safety* as a runtime policy problem, not a UX afterthought
• Makes oversight concrete: *safe-mode, human-in-the-loop, and “Why did it do X?” explanations*
• Shows how to evaluate impact without fooling yourself: *honest PoCs, heterogeneity, effect sizes, ethics of evaluation*

---

What’s Inside:
• A practical definition of a “learning companion” under SI-Core ([OBS]/[ID]/[ETH]/[MEM]/PLB loop)
• GCS decomposition + *age/context goal templates* (and “bad but attractive” optima)
• Safety playbook: threat model, *ETH policies*, ND/age extensions, safe-mode patterns
• Teacher/parent ops: onboarding, dashboards, contestation/override, downtime playbooks, comms
• Red-teaming & drills: scenario suites by age/context, *measuring safety over time*
• Evaluation design: “honest PoC”, day-to-day vs research metrics, ROI framing, analysis patterns
• Interpreting results: *effect size vs p-value*, “works for whom?”, go/no-go and scale-up stages

---

📖 Structured Intelligence Engineering Series
Reality123b posted an update 2 days ago