OpenClaw on Mac Mini: Local Models, Training, and Plugin Strategy

IAIP Research

1. Context and goals

This report assesses how well different Mac configurations (primarily Mac mini) support OpenClaw and related "Claws" such as Hermes, with a focus on:

  • Running local models (Ollama, QMD’s GGUF models, etc.) alongside cloud providers like GitHub Copilot and OpenAI-compatible APIs.
  • Enabling periodic local training / fine-tuning flows (weekend or monthly) driven by agents.
  • Choosing which OpenClaw plugins are worth enabling for this style of setup.

The analysis assumes you are comfortable with terminal workflows, multiple providers, and agent orchestration.

2. What the key OpenClaw plugins actually do

2.1 Model and search providers

  • @openclaw/github-copilot-provider exposes GitHub Copilot as a first‑class model provider inside OpenClaw, using a device-login flow to obtain a GitHub token and exchange it for Copilot API tokens, so the agent can call Copilot directly without VS Code.[^1]
  • Hugging Face provider (@openclaw/huggingface-provider) connects to the Hugging Face Inference API; you configure a fine‑grained HF token and choose a default model via OpenClaw’s onboarding or config, after which the provider discovers available chat-completion models and uses them at runtime.[^2][^3][^4]
  • Ollama provider (@openclaw/ollama-provider) uses Ollama’s native /api/chat API (not the OpenAI‑compatible /v1 route) to talk to your local Ollama daemon, with auto‑discovery of installed models when OLLAMA_API_KEY/auth is present and no explicit models.providers.ollama block is set.[^5][^6][^7]
  • MiniMax provider (@openclaw/minimax-provider) binds OpenClaw to the MiniMax hosted LLMs (M2.5, M2.7, vision models, etc.) via either OAuth portal auth or API keys, exposing them as Anthropic‑style message APIs with long context and strong agentic/tool‑calling capabilities.[^8][^9][^10]
  • Moonshot provider (@openclaw/moonshot-provider) connects to Moonshot’s Kimi models over OpenAI‑compatible endpoints, typically using moonshot/kimi-k2.5 or related IDs, with a separate provider and key for Kimi Coding.[^11][^12][^13]
  • Perplexity plugin (@openclaw/perplexity-plugin) exposes Perplexity as a web search provider (and optionally as a model provider via OpenRouter), configured with either PERPLEXITY_API_KEY or OPENROUTER_API_KEY, and wired into the web.search tool.[^14][^15][^16]
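Wired together, these plugins typically surface as provider entries in OpenClaw's configuration. The sketch below is illustrative only: the `models.providers` nesting mirrors the wording in the Ollama provider docs cited above, but every other key name is an assumption, and the real schema varies by plugin and OpenClaw version.

```jsonc
// Hypothetical OpenClaw config fragment; key names other than
// models.providers are assumptions, not documented schema.
{
  "models": {
    "providers": {
      "github-copilot": { "auth": "device-login" },       // token exchange handled by the plugin
      "ollama": { "baseUrl": "http://localhost:11434" },  // omit this block to use auto-discovery
      "huggingface": { "apiKey": "${HF_TOKEN}" },
      "moonshot": { "apiKey": "${MOONSHOT_API_KEY}", "model": "moonshot/kimi-k2.5" }
    }
  },
  "tools": {
    "web.search": { "provider": "perplexity", "apiKey": "${PERPLEXITY_API_KEY}" }
  }
}
```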

2.2 Google integration plugins

  • One common Google plugin for OpenClaw is @civic/openclaw-google, which injects Google OAuth tokens from Civic AuthZ into tool calls so agents can use Gmail, Calendar, and other Google APIs without storing raw credentials locally.[^17]
  • Another is a community Google Workspace plugin (openclaw-google-workspace), which uses local google-oauth.json and token storage to expose Gmail, Calendar, Drive, Contacts, Tasks, and Sheets as toggleable tools in OpenClaw.[^18]
  • In both cases the effect is the same: the agent can search mail, manage calendars, and touch Drive documents as part of its workflows, while all heavy compute stays in Google’s infrastructure rather than on your Mac.

So in your context, a Google plugin unlocks Google APIs (Gmail, Calendar, Drive) as tools that OpenClaw can use in flows; it has no bearing on local LLM performance.

2.3 ACPX runtime and ACP agents

  • The ACPX Runtime plugin embeds an Agent Client Protocol (ACP) runtime inside OpenClaw as a plugin‑owned session and transport manager, wiring a runtime service and reply‑dispatch hooks into the gateway.[^19]
  • ACP agents let OpenClaw talk to external coding harnesses (Pi, Claude Code, Codex, Cursor, Copilot, Gemini CLI, etc.) via stateful ACP sessions instead of scraping bare PTYs.[^20][^21]

For you, ACPX is primarily about letting Claws delegate to external dev tools (e.g., Codex/Gemini CLI) while keeping orchestration and safety logic in OpenClaw.

3. QMD: what it is and why it matters

QMD is a fully local hybrid search engine for markdown notes, docs, and knowledge bases, designed explicitly for agent workflows.[^22][^23][^24]

Key properties:

  • Runs entirely on-device, combining BM25 full‑text search, vector semantic search, and LLM re‑ranking via node-llama-cpp and GGUF models.[^23][^22]
  • Indexes collections from directories, supports structured query syntax (lex, vec, hyde types) and hybrid queries, and can export results for agents.[^25][^22]
  • Provides a built‑in MCP server and skill definitions (query, get, multi_get) so agents like Claude Code or OpenClaw can invoke it as a tool over MCP.[^26][^27]
  • Newer releases support AST‑aware chunking for code (via tree‑sitter), per‑collection `models:` configuration for the embed, rerank, and generate model URIs, and search‑quality benchmarking.[^28][^29]
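To make the moving parts concrete, a minimal session might look like the sketch below. The subcommand and flag spellings are assumptions inferred from the feature list above, not QMD's documented CLI; check `qmd --help` for the real spellings.

```bash
# Index a directory of markdown notes as a collection (hypothetical flags).
qmd collection add ~/notes/ceremony --name ceremony

# Hybrid query using the structured query types described above:
# lexical (BM25), vector (semantic), and hyde (LLM query expansion).
qmd query 'lex:"four directions" vec:"opening protocol" hyde:"how do we open a council?"'

# Serve the index to agents over MCP (tools: query, get, multi_get).
qmd mcp
```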

This makes QMD an ideal place for your agents to:

  • Store ceremony‑oriented and Indigenous‑epistemology notes as markdown.
  • Retrieve relevant context locally for RAG-like flows without any cloud dependency.
  • Experiment with custom embedding/re‑ranking models per collection.

Community posts and skills catalogs explicitly describe QMD as suitable for use as an agent memory/search backend, with MCP tools pre‑defined.[^30][^31][^26]
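On the agent side, that MCP server registers like any other. The `mcpServers` block below follows the common MCP client convention; whether OpenClaw consumes exactly this shape is an assumption.

```json
{
  "mcpServers": {
    "qmd": { "command": "qmd", "args": ["mcp"] }
  }
}
```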

4. Scenario group A – Local inference only (no training)

This section focuses on using local models (Ollama, QMD’s GGUF stack) together with your existing GitHub Copilot / cloud models, without performing on-device training.

4.1 Common constraints for Mac local LLMs

Recent guides and benchmarks for running Ollama and other local runtimes on Mac highlight several practical constraints:[^32][^33]

  • Memory is the main bottleneck, because Apple Silicon uses unified memory shared by CPU and GPU; you must leave 4–8 GB for macOS and background apps.[^34][^32]
  • An 8 GB Mac can run a single 3B model comfortably and maybe an 8B model if most other software is closed, but cannot host multiple concurrent models.[^33]
  • On 32 GB Macs, you can simultaneously run an 8B coding model plus utility models (STT/TTS) and even a mid‑size model like Gemma 31B, especially with Q4 quantization (~15–20 GB RAM for a 24–31B model).[^32][^33]
  • Guides for running Gemma 4 26B via Ollama on Mac mini report that 32 GB unified memory is a realistic minimum (16 GB swaps heavily), with Q4 quantized models consuming about 15 GB, Q6 around 20 GB, and Q8 about 27 GB; FP16 would require 64 GB.[^32]

These figures frame what is realistic on each Mac mini tier.
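A back-of-the-envelope formula reproduces these figures and lets you sanity-check any model/quantization pair before downloading: weights ≈ parameters (billions) × bits ÷ 8, plus headroom for KV cache and runtime buffers. The 25% overhead factor below is an assumption, not a number from the cited guides.

```bash
# Estimate the working set for a quantized model (rule of thumb, not a benchmark).
awk 'BEGIN {
  params_b = 26; bits = 4                 # e.g., a 26B model at Q4
  weights  = params_b * bits / 8          # = 13 GB of weights
  printf "weights = %.0f GB, working set = %.0f GB\n", weights, weights * 1.25
}'
# Output: weights = 13 GB, working set = 16 GB; compare the ~15 GB reported for
# Q4, and note that Q8 weights alone already reach 26 GB, matching the ~27 GB report.
```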

4.2 Scenario A1 – Minimal Mac mini for OpenClaw + small local models

Goal: daily OpenClaw/Hermes development, QMD indexing, and small local models, with GitHub Copilot as the main coding brain.

A pragmatic lower‑end configuration is:

  • Mac mini M2 or M4 with 16 GB unified memory, 512 GB–1 TB SSD.
  • Run OpenClaw gateway, Ollama, QMD, and your usual dev tooling.

Justification:

  • While a base 8 GB Mac mini technically can run a 3B–8B model, community testing shows this leaves almost no headroom: an 8B model alone uses ~5 GB, and macOS plus Chrome/IDE routinely consume 4–5 GB, which makes serious agent workloads fragile.[^33]
  • 16 GB allows a stable single 7B–8B quantized model plus OpenClaw, QMD, and browser/IDE, as long as you do not run multiple large models concurrently.[^28][^33]
  • Storage fills quickly: videos on running Ollama models from external SSDs highlight that a 256 GB internal SSD gets cramped once you start caching multiple models and automations, encouraging at least 512 GB or external NVMe.[^34]

In this scenario:

  • Use GitHub Copilot provider as the primary coding model (github-copilot), with OpenClaw handling device login and token exchange.[^1]
  • Use Ollama provider for one local generalist model (e.g., 7B) to keep private tasks offline and for redundancy when network is flaky.[^6][^5]
  • Use QMD as your knowledge store (notes, research, ceremony‑related docs) with its own lightweight GGUF models; these are smaller than your main chat LLM and fit comfortably on a 16 GB machine.[^22][^23][^28]

This is a good “floor” configuration if budget is tight and you are willing to keep local models modest and sequential.
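As a concrete bootstrap, the whole A1 stack comes up with a handful of commands; the model tag below is just one example of a 7B–8B-class generalist, and the `qmd` invocation reuses the hypothetical syntax from section 3.

```bash
# One quantized local generalist via Ollama (example tag; substitute your pick).
ollama pull llama3.1:8b
ollama run llama3.1:8b "smoke test"   # verify it coexists with your IDE/browser

# Knowledge layer: index your notes with QMD (hypothetical subcommand).
qmd collection add ~/notes --name notes
```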

4.3 Scenario A2 – Heavier local inference on Mac mini (large models via Ollama)

Goal: run medium‑to‑large open models locally (e.g., Gemma 26B/31B or comparable) while still using Copilot and cloud models.

Recommended configuration:

  • Mac mini M4 (or recent M‑series) with 32 GB unified memory, 1 TB+ SSD.

Rationale:

  • Practical Ollama tuning guides show Gemma 4 26B in Q4 quantization using ~15 GB and in Q6 around ~20 GB, and recommend 32 GB as a realistic minimum, since 16 GB machines swap heavily and stall, while 24 GB is borderline.[^32]
  • A broader RAM guide for Mac+Ollama notes that 32 GB is the “ideal configuration” because you can run a 14B model plus additional 8B models and utilities in parallel, and that a 70B quantized model consumes about 42 GB RAM—already beyond what most Mac mini SKUs can host.[^33]
  • Articles comparing M‑series Macs for local AI emphasize that models of this size are feasible but much more comfortable on machines with 32 GB or more unified memory, especially once you factor in context windows and multiple tools.[^35]

On such a Mac mini, your OpenClaw setup can reasonably do:

  • One 26B–31B local model via Ollama for deep local reasoning, using Q4_K_M or Q6 quantization.
  • Concurrent QMD embeddings + re‑ranker models plus a small TTS/STT model.
  • Cloud providers (Copilot, MiniMax, Moonshot, Perplexity) as fallbacks or specialists when needed.

Plugins worth enabling in this scenario:

  • @openclaw/ollama-provider – core for local LLMs.[^5][^6]
  • @openclaw/github-copilot-provider – keep cloud coding assistance wired into Claw flows.[^1]
  • @openclaw/perplexity-plugin – high‑quality web search tool for research tasks.[^16][^14]
  • @openclaw/huggingface-provider – for occasional access to hosted HF models when local ones are insufficient.[^3][^2]
  • ACPX runtime if you expect to orchestrate external coding harnesses (Codex, Gemini CLI) from the same box.[^19][^20]

5. Scenario group B – Local training and self‑updating agents

This section looks at using a Mac mini (and possibly another Mac) to let personas periodically fine‑tune or adapt models locally—especially for QMD and small task‑specific models.

5.1 What the community is actually training locally

Several threads and tutorials show realistic patterns for on‑device training on Apple Silicon:

  • Apple’s MLX framework is explicitly designed to train and fine‑tune LLMs on Apple Silicon, with official examples and WWDC sessions demonstrating fine‑tuning 7B‑class models (e.g., Mistral derivatives) on MacBook Pro M3 hardware.[^36][^37][^38]
  • Walkthroughs from practitioners show fine‑tuning Mistral‑7B locally on Apple Silicon using MLX, often with LoRA/QLoRA to keep memory requirements manageable.[^39][^40][^41]
  • One detailed article describes fine‑tuning Mistral‑7B on a 2020 Mac mini M1 with 16 GB unified memory using MLX and QLoRA, arguing that unified memory allows “modest systems” like an M1 16 GB to run such fine‑tuning jobs.[^40]
  • Tutorials and videos explain how to use MLX to fine‑tune a model and then convert or fuse adapters into GGUF to run via llama.cpp/Ollama, effectively separating training (MLX) from serving (Ollama or QMD’s node‑llama‑cpp backend).[^38][^42][^43]
  • The Ollama team themselves emphasize in GitHub issues that Ollama does not currently expose full fine‑tuning in‑place, and that the recommended pattern is to fine‑tune elsewhere (MLX, Unsloth, etc.) and then import adapters or GGUF weights via Modelfile ADAPTER entries.[^44][^45]
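End to end, that train-elsewhere/serve-in-Ollama pattern looks roughly like the sketch below. `mlx_lm.lora` is real mlx-lm tooling and `ADAPTER` is a documented Modelfile directive, but the specific flags, conversion script name, and paths are illustrative and shift between releases, so verify against your installs.

```bash
# 1) LoRA fine-tune a 7B model with MLX (flags representative, not canonical).
mlx_lm.lora --model mistralai/Mistral-7B-Instruct-v0.2 \
            --train --data ./data --iters 600 --adapter-path ./adapters

# 2) Convert the adapter to GGUF with llama.cpp tooling (script name varies by release).
python convert_lora_to_gguf.py ./adapters --outfile ./adapter.gguf

# 3) Layer the adapter on a base model in Ollama and serve it.
cat > Modelfile <<'EOF'
FROM mistral:7b-instruct
ADAPTER ./adapter.gguf
EOF
ollama create mistral-tuned -f Modelfile
ollama run mistral-tuned "sanity check"
```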

For QMD specifically:

  • QMD uses GGUF models for embeddings, re‑ranking, and optional generation, driven by node-llama-cpp; recent releases allow per‑collection model URIs for these roles, so you can point different collections at different embedding/rerank models.[^29][^22][^28]
  • It is explicitly positioned as “ready to use out of the box, doesn’t require internet access, supports the MCP protocol, and can be used as a knowledge search tool in AI assistant and Agent workflows,” which aligns with your vision of Claws using QMD as an on‑device knowledge system.[^27][^29]
  • Tobi’s own discussions describe using QMD with small 0.8B–1.6B models and running overnight “autoresearch” runs that search model + hyperparameter space for better query‑expansion models, achieving a +19% score improvement on a 0.8B model vs a prior 1.6B in ~8 hours of experiments.[^46]

This indicates that weekend‑scale training loops on Apple Silicon are extremely plausible for the kind of sub‑10B models QMD and your agents might use.

5.2 Scenario B1 – Minimal training on a Mac mini (self‑training agents, QMD‑scale models)

Goal: Let personas run periodic fine‑tuning jobs (LoRA/QLoRA) on small models (≈7B and below), especially for:

  • Query expansion and re‑ranking models used by QMD.
  • Small task models (e.g., ceremony‑aware style models or domain‑specific classifiers).

A realistic lower‑bound configuration, supported by community examples, is:

  • Mac mini M1/M2 with 16 GB unified memory.

Evidence:

  • A detailed tutorial demonstrates fine‑tuning a 7B model (Mistral‑7B Instruct) locally on a 2020 Mac mini M1 with 16 GB, using MLX and QLoRA; the author explicitly notes that MLX’s efficient use of unified memory makes this feasible on “modest systems like mine – M1 16GB.”[^40]
  • Similar posts and videos show LoRA/QLoRA fine‑tuning of Mistral‑7B on M‑series Macs, emphasizing that LoRA updates only a small fraction of parameters and thus dramatically lowers memory requirements compared to full fine‑tuning.[^41][^47][^39]

Constraints and implications:

  • Training runs will be slow and single‑purpose; weekend or overnight runs are reasonable, but you would not want to run concurrent heavy agent workloads while training.[^40]
  • You would target compact models (≤7B) for adapters; these are perfect for QMD’s embedding/rerank/generate functions and for small task‑specific models your Claws can call.
  • You can then convert trained adapters to GGUF or use ADAPTER in Ollama Modelfiles to serve them efficiently.[^42][^44]

Plugins/stack for this scenario:

  • Keep Ollama provider for serving base and fine‑tuned local models once converted.[^6][^44][^5]
  • Use QMD as the central knowledge index, pointing collections at your fine‑tuned embedding/re‑rank models via `models:` in `index.yml`; a sketch follows this list.[^29][^28]
  • Use ACPX runtime to orchestrate long‑running training harnesses (e.g., a dedicated MLX training CLI exposed via ACP).[^20][^19]
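That per-collection override might then look like the following `index.yml` sketch. The embed/rerank/generate roles come from QMD's own feature description in section 3; the URI schemes and surrounding field names are assumptions.

```yaml
# index.yml for one collection (field names illustrative, roles per QMD docs).
name: ceremony
path: ~/notes/ceremony
models:
  embed: file://~/models/ceremony-embed-0.6b-q8_0.gguf    # fine-tuned embedder
  rerank: file://~/models/ceremony-rerank-1.6b-q4.gguf    # weekend-trained reranker
  generate: file://~/models/query-expand-0.8b-q8_0.gguf   # query-expansion model
```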

5.3 Scenario B2 – Maximal training on a Mac mini (pushing 7B+ and experimentation)

Goal: Use the Mac mini as a serious local training workstation for 7B models and possibly multiple concurrent experimental runs, while still supporting day‑to‑day agent workflows.

Recommended configuration:

  • Mac mini M4 (or similar) with 32 GB unified memory.

Rationale:

  • MLX and InstructLab demonstrations show that fine‑tuning 7B‑class models (e.g., Merlinite/Mistral‑7B) on recent Apple Silicon laptops can complete a few epochs over a small dataset in under ten minutes, suggesting ample compute for 7B on M‑series with decent memory.[^37][^48][^36]
  • Community QLoRA/LoRA guides for M‑series Macs suggest that 7B with 4‑bit quantization plus LoRA adapters fits comfortably on 16 GB, but 32 GB makes it practical to run multiple experiments or keep more context and supporting tools alive during training.[^47][^49][^40]
  • The same memory reasoning as for inference applies: 32 GB is widely reported as the sweet spot for running multiple models simultaneously without swapping, and training workloads are even more sensitive to swapping and memory pressure.[^33]

In this scenario, your "self‑training" agents can:

  • Run weekend hyperparameter search over query‑expansion models for QMD (as in Tobi’s example where 37 experiments over 8 hours improved a 0.8B model).[^46]
  • Fine‑tune small assistant models for ceremony‑aware style, narrative structure, or Indigenous ontology alignment.
  • Train specialized rerankers per collection and deploy them via QMD’s per‑collection model config.[^28][^29]

Operationally, you would:

  • Reserve windows (e.g., weekend night slots) where a Claw (Hermes) spins up an MLX‑based training loop via ACPX.
  • Save LoRA adapters, convert or package them for QMD/Ollama.
  • Update OpenClaw model catalogs to add new local models as alternate providers for specific tasks.
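Mechanically, the reserved window can be a single scheduled script that Hermes triggers or monitors. Everything below (paths, model names, flags) is a hypothetical sketch of that loop, reusing the MLX-to-Ollama pipeline from section 5.1.

```bash
#!/bin/sh
# weekend-train.sh: one guarded training window (all paths and flags illustrative).
set -e
run="runs/$(date +%Y%m%d)"
mlx_lm.lora --model ./base-7b --train --data ./corpus \
            --iters 2000 --adapter-path "$run"            # overnight LoRA job
python convert_lora_to_gguf.py "$run" --outfile out/adapter.gguf
ollama create weekend-tuned -f Modelfile                  # Modelfile points at the new adapter

# Schedule for Saturday 02:00 via cron (launchd StartCalendarInterval also works):
# 0 2 * * 6  $HOME/bin/weekend-train.sh >> $HOME/logs/weekend-train.log 2>&1
```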

5.4 Scenario B3 – Another Mac model: MacBook Pro / Mac Studio as training “anchor”

Because Mac mini RAM ceilings constrain both inference and training of larger models, it is worth considering one higher‑end Mac as a training anchor.

Two concrete options that show up in local‑LLM guidance:

  • MacBook Pro M3 Max with 64–128 GB unified memory.

    • A table comparing Mac models for local AI notes that an M3 Max MacBook Pro with 64–128 GB RAM and 40 GPU cores can host quantized Llama 3 70B models and multiple 13B models simultaneously, reflecting significantly more headroom than a Mac mini.[^35]
    • This is ideal as a “lab” machine for training multiple 7B–13B models or experimenting with larger architectures, with the ability to then export GGUF weights or adapters to your Mac mini.
  • Mac Studio / higher‑end desktop M‑series.

    • Articles on Ollama on Mac Silicon point out that for “serious work with larger models,” a Mac Studio configuration is recommended, especially once you try to run very large models or many concurrent models.[^35]

In your architecture, such a machine could:

  • Run heavy MLX/Unsloth training and export artifacts.
  • Serve as a CI‑like environment where agents run more aggressive experiments.
  • Feed trained models back into QMD and Ollama instances on your Mac mini and other nodes.

6. What your agents are likely to train

Given the above, the most realistic training targets for a Mac‑centric setup are not frontier foundation models, but small, targeted components:

  1. QMD‑adjacent models

    • Embedding models specialized on your ceremony / Indigenous epistemology corpora.[^24][^27][^22]
    • Re‑rankers that better understand your narrative structure and episodic memory conventions.[^29][^28]
    • Query expansion models tuned via overnight “autoresearch” loops similar to Tobi’s QMD experiments.[^46]
  2. Small assistant models and classifiers

    • 3B–7B models fine‑tuned via LoRA for specific tasks such as protocol drafting, narrative weaving, or ethical checks, using MLX on Apple Silicon.[^48][^39][^41]
    • Lightweight classifiers (e.g., label a segment as "ceremonial", "technical", "story fragment", etc.) that your agent stack can use for routing.
  3. Tool‑oriented adapters

    • Specialized models for improving tool selection, retrieval, and summarization over your own repositories, again at 3B–7B scale and aligned with local QMD collections.[^26][^30][^22]

6.1 Plugins that help this training story

The following OpenClaw plugins and components are directly helpful for such training loops:

  • Ollama provider – serves quantized models and imported adapters; you train via MLX/other frameworks, then load into Ollama with ADAPTER or new GGUF weights.[^44][^5][^6]
  • QMD – acts as the knowledge index and evaluation harness for search quality (it even includes qmd bench tooling and fixtures).[^28][^29]
  • ACPX Runtime + ACP Agents – lets agents orchestrate long‑running training processes through ACP sessions rather than fragile PTY scraping.[^21][^19][^20]
  • Hugging Face provider – useful mainly for evaluation or teacher‑student setups, where you compare your small local models against HF‑hosted baselines.[^2][^3]
  • MiniMax / Moonshot providers – high‑capability cloud models that can act as “teachers” or judges for few‑shot or distillation‑style workflows, especially in agentic/tool‑calling scenarios where they are particularly strong.[^9][^12][^8][^11]
  • Perplexity plugin – provides high‑quality web search for building or refreshing evaluation sets and training corpora from the public web.[^14][^16]

7. How GitHub Copilot and the Google plugin fit into your stack

7.1 GitHub Copilot provider in OpenClaw

OpenClaw’s GitHub Copilot provider gives you two main benefits:[^1]

  • You can designate `github

References

  1. Perplexity research thread: “Prepare documents to feed the discussion using all potential sources...” (https://www.perplexity.ai/search/2f6a2c94-4460-4f14-a980-1c7be7bc02b4) - Here’s a research packet you and your agents can work from...
  2. Perplexity research thread: “potential input for local agent ... miaco-decompose-copilot ‘Ceremonial Technology Development...’” (https://www.perplexity.ai/search/36797a42-89f1-42e0-9f4f-891940c3ba37) - Guillaume, the work is complete and the obligations are named...
  3. ClawKeeper: Comprehensive Safety Protection for OpenClaw Agents Through Skills, Plugins, and Watchers
  4. OpenClaw PRISM: A Zero-Fork, Defense-in-Depth Runtime Security Layer for Tool-Augmented LLM Agents
  5. A Systematic Taxonomy of Security Vulnerabilities in the OpenClaw AI Agent Framework
  6. Taming OpenClaw: Security Analysis and Mitigation of Autonomous LLM Agent Threats
  7. Voice Computing with Python in Jupyter Notebooks
  8. Governance Architecture for Autonomous Agent Systems: Threats, Framework, and Engineering Practice
  9. BadSkill: Backdoor Attacks on Agent Skills via Model-in-Skill Poisoning
  10. Large Language Models (LLMs) and Generative AI in Cybersecurity and Privacy: A Survey of Dual-Use Risks, AI-Generated Malware, Explainability, and Defensive Strategies
  11. CNVpytor: a tool for copy number variation detection and analysis from read depth and allele imbalance in whole-genome sequencing
  12. 𝜇Akka: Mutation Testing for Actor Concurrency in Akka using Real-World Bugs
  13. LLM Platform Security: Applying a Systematic Evaluation Framework to OpenAI's ChatGPT Plugins
  14. The GeoClaw software for depth-averaged flows with adaptive refinement
  15. OpenHands: An Open Platform for AI Software Developers as Generalist Agents
  16. RTAB-Map as an Open-Source Lidar and Visual SLAM Library for Large-Scale and Long-Term Online Operation
  17. OpenVSLAM: A Versatile Visual SLAM Framework
  18. Domain Enhanced Arbitrary Image Style Transfer via Contrastive Learning
  19. What is in the Chrome Web Store? Investigating Security-Noteworthy Browser Extensions
  20. Plugins - OpenClaw Docs
  21. Hugging Face (Inference) - OpenClaw Docs
  22. How to Get Gemma 4 26B Running on a Mac Mini with Ollama - “For Gemma 4 26B, you realistically need a Mac mini with at least 32GB of unified memory. The 16GB mo...”
  23. OpenClaw — Personal AI Assistant - GitHub
  24. OpenClaw: extensions/huggingface/index.ts - Fossies
  25. RAM guide: What model combinations actually fit on common Macs - “8GB Mac: One model. Llama 3.2 3B (~2GB) or Phi-4-mini (~2.5GB) run comfortably. An 8B alone (~5GB) w...”
  26. plugins - OpenClaw Docs
  27. How to integrate Hugging Face MCP with OpenClaw - Composio
  28. Ollama on Mac: Apple Silicon M1-M4 Setup & Metal GPU Guide - “Recommended Models by Mac Configuration · 8GB Macs (Air M1/M2 base) · 16GB Macs (Pro M1/M2) · 32GB+ ...”
  29. openclaw/docs/tools/index.md at main - GitHub
  30. Hugging Face (Inference) - OpenClaw - Hugging Face Inference setup (auth + model selection)
  31. Ollama on Mac Silicon: Local AI for M-Series Macs - John W. Little - “Memory Requirements: Always leave at least 4-8GB RAM free for system operations. For serious work wi...”
  32. OpenClaw - GitHub
  33. Hugging Face (Inference) | OpenClaw Docs
  34. Running Ollama AI Models on External SSD for Mac Mini - YouTube
  35. Development of a SketchUp Plugin for Automating Building Cost Estimates (RAB) Using Google Services
  36. ChatGPT models provide higher-quality but lower-readability responses than Google Gemini regarding anterior shoulder instability, with no added benefit of the orthopaedic expert plugin
  37. Introducing MapSWAT: An open source QGIS plugin integrated with Google Earth Engine for efficiently generating ready-to-use SWAT+ input maps
  38. Revolutionizing language processing in libraries with SheetGPT: an integration of Google Sheets and a ChatGPT plugin
  39. Training Teachers to Build Learning Assessments with the Autoproctor Plugin on Google Forms in Binjai, North Sumatra
  40. Design of a WebSocket-Based Multiplayer Game Plugin for the Moodle LMS on a Low-Specification Google Cloud Server
  41. Spatio-temporal analysis of urban expansion and land use dynamics using Google Earth Engine and predictive models
  42. Applying Search Engine Optimization (SEO) Techniques with the All In One SEO Pack Plugin on the Original Soundtrack Film and Music (OSTFM) Website
  43. Preserving Memory with Digital Accessibility: A Plugin for Image Description with Generative AI
  44. FRAUD.MY: a crowdsourcing web application to detect scammers with Algolia and Google extension plugin / Syed Muhamad Danial Syed Hedzer
  45. A Security Analysis of Browser Extensions
  46. Exposing and Addressing Security Vulnerabilities in Browser Text Input Fields
  47. Did I Vet You Before? Assessing the Chrome Web Store Vetting Process through Browser Extension Similarity
  48. Privacy vs. Profit: The Impact of Google's Manifest Version 3 (MV3) Update on Ad Blocker Effectiveness
  49. A Study on Malicious Browser Extensions in 2025