docs: Update AI stack future plans - N5 Air with Ryzen AI 5 255
All checks were successful
ci/woodpecker/push/woodpecker Pipeline was successful
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
@@ -146,17 +146,17 @@ Infrastructure docs are pre-copied to that location.
 
 ---
 
-## Future: Mac Mini M4 Upgrade
+## Future: N5 Air (Ryzen AI 5 255) Upgrade
 
-Planning to migrate AI stack to Mac Mini M4 (32GB):
+Planning to migrate AI stack to N5 Air with Ryzen AI 5 255:
 
-| Metric | N100 (current) | M4 (planned) |
+| Metric | N100 (current) | Ryzen AI 5 255 |
 |--------|----------------|--------------|
-| Speed | ~1 tok/s | ~15-20 tok/s |
-| Max model | 7B | 70B+ |
-| Response time | 30-90s | 3-5s |
+| Speed | ~1 tok/s | ~5-8 tok/s |
+| Max model | 7B | 14B-32B |
+| Response time | 30-90s | 5-15s |
 
-The M4 unified memory architecture is ideal for LLM inference.
+Features XDNA NPU (16 TOPS) for potential AI acceleration. DDR5 + 6c/12t CPU will significantly improve inference.
 
 ---
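The speed and response-time rows in the table are related by a simple estimate: generation time is roughly output tokens divided by tokens per second (prompt processing ignored). A minimal sketch of that sanity check, assuming an illustrative 60-token reply (the token count is not from the commit):

```python
# Back-of-envelope check of the comparison table:
# response time ~= output tokens / generation speed.
# The 60-token reply length below is an assumption for illustration.

def response_time_s(output_tokens: float, tok_per_s: float) -> float:
    """Estimate wall-clock generation time, ignoring prompt processing."""
    return output_tokens / tok_per_s

# N100 at ~1 tok/s: a 60-token reply lands inside the stated 30-90s band.
print(response_time_s(60, 1.0))   # 60.0

# Ryzen AI 5 255 at ~5-8 tok/s: the same reply fits the 5-15s band.
print(response_time_s(60, 5.0))   # 12.0
print(response_time_s(60, 8.0))   # 7.5
```

With these assumed reply lengths, both rows of the table are internally consistent with the stated tok/s figures.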