
Hi, I'm Sayed Allam 👋

AI Engineer · LLM Agents Architect · AI Automation Developer · AI Evaluation Specialist

I build AI systems around LLM agents, RAG, automation, local inference, tool use, evaluation, and self-improving workflows.

I like working close to how AI agents behave: how they understand tasks, use context, choose tools, fail, recover, and improve through feedback.

Portfolio · GitHub · X · Bluesky


What I Focus On

  • LLM agents that use tools, APIs, memory, retrieval, and structured workflows
  • RAG systems for knowledge bases, internal assistants, and source-grounded answers
  • AI automation with n8n, webhooks, APIs, schedules, and approval flows
  • Agent identity files such as AGENTS.md, CLAUDE.md, GEMINI.md, llms.txt, and project memory
  • Local AI agents using Ollama, LM Studio, llama.cpp, vLLM, SGLang, GGUF, and local APIs
  • Evaluation workflows for prompts, agents, retrieval quality, tool calls, and model behavior
  • Self-improving agent patterns using feedback, traces, rubrics, memory, and regression tests
  • MCP-based tool integrations, coding agents, browser agents, and agentic workflows

How I Think About AI Systems

I think about AI systems as a loop, not a one-shot prompt.

```mermaid
flowchart TD
    A[Project Identity] --> B[Instructions and Memory]
    B --> C[Context and Knowledge]
    C --> D[Tools and APIs]
    D --> E[Agent Runtime]
    E --> F[Model Layer]
    F --> G[Evaluation and Observability]
    G --> H[Feedback and Improvement]
    H --> B

    C --> C1[RAG]
    C --> C2[Knowledge Base]
    C --> C3[Project Docs]

    D --> D1[MCP]
    D --> D2[Webhooks]
    D --> D3[Local Tools]

    E --> E1[Planning]
    E --> E2[Routing]
    E --> E3[Human Approval]

    F --> F1[Hosted Models]
    F --> F2[Local Models]

    G --> G1[Traces]
    G --> G2[Rubrics]
    G --> G3[Regression Tests]
```

A strong AI system needs more than a model. It needs identity, instructions, context, tools, memory, evaluation, observability, and a way to improve after each run.
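The loop above can be sketched in a few lines of Python. This is a toy illustration, not any framework's API: `call_model`, `evaluate`, and `run_loop` are hypothetical stand-ins for a real model client, a rubric scorer, and an agent runtime.

```python
# Minimal sketch of the run loop: act, evaluate, feed the result back
# into memory, and try again until the rubric passes.

def call_model(prompt: str, context: list[str]) -> str:
    # Hypothetical stand-in for a hosted or local LLM call.
    return f"answer using {len(context)} context items"

def evaluate(answer: str, rubric: str) -> float:
    # Rubric-based score in [0, 1]; here a trivial substring check.
    return 1.0 if rubric in answer else 0.0

def run_loop(task: str, memory: list[str], rubric: str, max_turns: int = 3) -> str:
    answer = ""
    for _ in range(max_turns):
        answer = call_model(task, memory)
        score = evaluate(answer, rubric)
        if score >= 1.0:
            break
        # Feedback and traces flow back into memory for the next attempt.
        memory.append(f"previous attempt scored {score}: {answer}")
    return answer
```

In a real system each stub grows into its own layer: `call_model` becomes the model layer with routing and fallbacks, `evaluate` becomes traces plus rubrics plus regression tests, and `memory` becomes project files and retrieval.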


What I Build

  • LLM agent systems
  • Local AI agents
  • RAG-based assistants
  • AI-ready knowledge systems
  • Agent instruction files
  • AI automation workflows
  • Internal AI copilots
  • Multi-agent research workflows
  • Tool-calling agent infrastructure
  • MCP-based tool integrations
  • Evaluation and feedback pipelines
  • Human-in-the-loop AI workflows
  • AI prototypes connected to real tools and data

Featured Projects

| Project | Description |
| --- | --- |
| LLM Agents Ecosystem Handbook | A practical reference for understanding, building, evaluating, and deploying LLM agents. |
| Ultimate n8n AI Workflows | AI automation workflows using n8n, LLMs, APIs, triggers, and business tools. |
| Context Engineering | Experiments around retrieval, memory, context packing, long-context workflows, and token flow. |
| Deep Semantic Enhancer | A prompt enhancement system for turning rough ideas into structured instructions. |
| Full System Prompts | Research and examples around system prompts, instruction hierarchy, and model behavior. |
| Curated MCP Servers | A curated collection of MCP resources for building tool-using AI agents. |

Technical Focus

Agent Systems

  • Tool-calling agents
  • Planner-executor workflows
  • Router agents
  • Research agents
  • Retrieval agents
  • Multi-agent coordination
  • Agent memory and state
  • Agent handoffs
  • Human approval flows
  • Guardrails and tool validation
  • Agent tracing and debugging
  • MCP-based tool integration
  • Local agent runtime design

Agent Identity and AI-Readable Docs

  • AGENTS.md workflows
  • CLAUDE.md project memory
  • GEMINI.md instructions
  • llms.txt documentation maps
  • Cursor rules
  • Windsurf rules
  • Project memory files
  • System prompt files
  • Prompt contracts
  • Repository-specific agent guidance
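As an illustration, a minimal agent identity file might look like this. The project name, paths, and rules are hypothetical; real files are specific to each repository.

```markdown
# AGENTS.md

## Project
Internal support assistant. Answers must cite a knowledge-base source.

## Conventions
- Python 3.11, FastAPI; run `pytest` before committing.
- Never call external APIs from tests.

## Memory
- Decisions and known failure modes live in `docs/decisions.md`.
```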

Retrieval and Knowledge

  • RAG pipelines
  • Agentic RAG
  • GraphRAG
  • Hybrid search
  • Query rewriting
  • Query expansion
  • Reranking
  • Context packing
  • Context compression
  • Source-grounded answers
  • Knowledge freshness
  • Retrieval evaluation
  • Structured knowledge graphs
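The core retrieval step behind most of the items above can be reduced to a tiny sketch: embed the query and documents, rank by cosine similarity, return the top k. This uses a toy bag-of-words "embedding" purely for illustration; real pipelines use dense vectors from an embedding model plus hybrid search and reranking.

```python
from collections import Counter
import math

def embed(text: str) -> Counter:
    # Toy bag-of-words vector; a real pipeline calls an embedding model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]
```

Everything else on the list (query rewriting, context packing, reranking, retrieval evaluation) wraps around this loop: transforming the query before `embed`, re-ordering after `retrieve`, or scoring what came back.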

Evaluation and Improvement

  • Prompt evaluation
  • Agent evaluation
  • Tool-call evaluation
  • Trace-based evaluation
  • Retrieval quality evaluation
  • Rubric-based scoring
  • Failure analysis
  • Hallucination detection
  • Regression testing
  • Feedback loops
  • Self-improving workflows
  • Cost and latency monitoring

Automation and Workflows

  • n8n workflows
  • Webhooks
  • REST APIs
  • Scheduled workflows
  • Event-driven automation
  • Support automation
  • Research automation
  • Content workflows
  • Approval-based workflows
  • Human-in-the-loop systems
  • Monitoring and alerting workflows

Tools and Technologies

Languages and Backend

Python · TypeScript · JavaScript · Node.js · FastAPI · Next.js · REST APIs · Webhooks · JSON Schema · Docker

LLM Agents and Frameworks

OpenAI Agents SDK · LangGraph · LangChain · LlamaIndex · CrewAI · AutoGen · Semantic Kernel · Pydantic AI · DSPy · MCP

Coding Agents and Agent Rules

Codex CLI · Claude Code · Gemini CLI · Cursor · Windsurf · Cline · Roo Code · AGENTS.md · CLAUDE.md · llms.txt

Retrieval and Data

Pinecone · Qdrant · Weaviate · Chroma · Milvus · LanceDB · FAISS · pgvector · PostgreSQL · Supabase · Redis · Neo4j · Elasticsearch

Automation and Workflows

n8n · Make · Zapier · Activepieces · Pipedream · Dify · Flowise · Langflow

Models, Local Inference and Providers

I work with hosted APIs, local inference stacks, and model routing layers for agents that need privacy, speed, offline workflows, fallback routing, or full control over behavior.

OpenAI · Claude · Gemini · Mistral · Llama · Qwen · DeepSeek · Gemma · Phi · Groq · OpenRouter · Together AI · Fireworks AI · LiteLLM

Local Inference and Agent Runtime

Ollama · LM Studio · llama.cpp · vLLM · SGLang · LocalAI · Transformers · GGUF · Local REST APIs

Evaluation and Observability

LangSmith Langfuse Ragas DeepEval Arize Phoenix Braintrust Humanloop OpenTelemetry Promptfoo


Open to Collaboration

I'm open to collaborating on:

  • LLM agents and agentic workflows
  • Local AI agents
  • AI-ready knowledge systems
  • AI training and model evaluation
  • RAG and context engineering
  • n8n AI automation
  • MCP and tool-using agents
  • AI observability and evaluation tools
  • Voice agents and multimodal workflows
  • Open-source AI infrastructure

GitHub Stats

Sayed Allam GitHub Stats

Top Languages


Connect With Me


Building AI systems that connect language models with tools, knowledge, evaluation, automation, and real work.
