Show HN: LlamaPReview – AI code reviewer trusted by 2000 repos, 40%+ effective (jetxu-llm.github.io)
2 points by Jet_Xu 65 days ago
Hi HN! A month ago, I shared LlamaPReview in Show HN[1]. Since then, we've grown to 2000+ repos (60%+ public) with 16k+ combined stars. More importantly, we've made significant improvements in both efficiency and review quality.

Key improvements over the past month:

1. ReAct-based Review Pipeline: We implemented a ReAct (Reasoning + Acting) pattern that mimics how senior developers review code. Here's a simplified version:

  ```python
  def react_based_review(pr_context) -> Review:
    # Step 1: Initial Assessment - understand the changes
    initial_analysis = initial_assessment(pr_context)
    # Step 2: Deep Technical Analysis, guided by the initial assessment
    technical_analysis = deep_analysis(pr_context, initial_analysis)
    # Step 3: Final Synthesis of both analyses into a review
    return synthesize_review(pr_context, initial_analysis, technical_analysis)
  ```
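If you're not familiar with ReAct, here's a minimal, illustrative sketch of a single reason/act loop (Step, call_llm, and react_loop are placeholder names, not our production internals): the model reasons about the diff, optionally asks for more file context, and stops once it has enough to write the review.

  ```python
  from dataclasses import dataclass
  from typing import List, Optional

  @dataclass
  class Step:
    thought: str                  # the model's reasoning for this iteration
    action: Optional[str] = None  # e.g. "read_file:src/utils.py"; None = done

  def call_llm(prompt: str) -> Step:
    # Stub standing in for the large review model
    return Step(thought="No further context needed.")

  def react_loop(pr_diff: str, repo_files: dict, max_steps: int = 5) -> List[Step]:
    # Alternate reasoning ("thought") with acting (pulling extra file context)
    transcript, observations = [], ""
    for _ in range(max_steps):
      step = call_llm(f"Diff:\n{pr_diff}\n\nObservations:\n{observations}")
      transcript.append(step)
      if step.action is None:
        break  # enough context gathered to write the review
      # Act on the request, then feed the result back as an observation
      _, _, path = step.action.partition(":")
      observations += f"\n--- {path} ---\n{repo_files.get(path, '')}"
    return transcript
  ```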
2. Two-stage format alignment pipeline

  ```python
  def review_pipeline(pr_context) -> Review:
    # Stage 1: Deep analysis with a large LLM
    review = react_based_review(pr_context)
    # Stage 2: Format standardization with a small LLM
    return format_standardize(review)
  ```
This two-stage approach (large LLM for analysis + small LLM for format standardization) ensures both high-quality insights and consistent output format.
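
For illustration, stage 2 can be as simple as asking a small model to map the raw review onto a fixed template without re-analyzing the code (REVIEW_TEMPLATE and call_small_llm below are placeholder names, not our production prompt):

  ```python
  # Placeholder sketch: the small model only reshapes the review,
  # it never adds or drops findings.
  REVIEW_TEMPLATE = (
    "## Summary\n{summary}\n\n"
    "## Findings\n{findings}\n\n"
    "## Suggested actions\n{actions}\n"
  )

  def call_small_llm(prompt: str) -> str:
    # Stub standing in for the small, cheap formatting model
    return prompt

  def format_standardize(raw_review: str) -> str:
    prompt = (
      "Rewrite this code review so it follows the template exactly, "
      "without adding new findings.\n\n"
      f"TEMPLATE:\n{REVIEW_TEMPLATE}\nREVIEW:\n{raw_review}"
    )
    return call_small_llm(prompt)
  ```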

3. Intelligent Skip Analysis: We now automatically identify PRs that don't need deep review (docs, dependencies, formatting), reducing token consumption by 40%. Implementation:

  ```python
  from typing import Tuple

  def intelligent_skip_analysis(pr_changes) -> Tuple[bool, str]:
    # Map each skip condition to the heuristic that detects it
    skip_conditions = {
      'docs_only': check_documentation_changes,
      'dependency_updates': check_dependency_files,
      'formatting': check_formatting_only,
      'configuration': check_config_files
    }

    for condition_name, checker in skip_conditions.items():
      if checker(pr_changes):
        return True, f"Optimizing review: {condition_name}"

    return False, "Proceeding with full review"
  ```
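Each checker is a lightweight heuristic. For example, a docs-only check can be a simple path filter (the suffixes and directories below are illustrative, not the full list):

  ```python
  # Illustrative docs-only checker: skip deep review when every changed path
  # looks like documentation.
  DOC_SUFFIXES = ('.md', '.rst', '.txt')
  DOC_DIRS = ('docs/',)

  def check_documentation_changes(pr_changes) -> bool:
    # Assumes pr_changes can be iterated as changed file paths
    paths = list(pr_changes)
    return bool(paths) and all(
      p.endswith(DOC_SUFFIXES) or p.startswith(DOC_DIRS) for p in paths
    )
  ```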
Key metrics since launch:

  - 2000+ repos using LlamaPReview  
  - 60% public, 40% private repositories  
  - 40% reduction in token consumption  
  - 30% faster PR processing  
  - 25% higher user satisfaction
Privacy & Security:

  Many asked about code privacy in the last thread. Here's how we handle it:  
  - All PR review processing happens in-memory  
  - No permanent storage of repository code  
  - Immediate cleanup after PR review  
  - No training on user code
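  To make this concrete, a review roughly follows this ephemeral shape (simplified sketch, not the production code):

  ```python
  # Simplified sketch: PR content is held in memory only for the duration of
  # the review and dropped as soon as the comment is posted.
  from contextlib import contextmanager

  @contextmanager
  def ephemeral_pr_data(fetch_pr):
    pr_data = fetch_pr()  # diff + metadata pulled into memory, never to disk
    try:
      yield pr_data
    finally:
      pr_data.clear()     # immediate cleanup after the review is posted

  # Usage: with ephemeral_pr_data(fetch) as pr: post_comment(review_pipeline(pr))
  ```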
What's next:

  We are actively working on GraphRAG-based repository understanding for better in-depth code review analysis and pattern detection.
Links:

  [1] Previous Show HN discussion: [https://news.ycombinator.com/item?id=41996859]  
  [2] Technical deep-dive: [https://github.com/JetXu-LLM/LlamaPReview-site/discussions/3]  
  [3] Install (free): [https://github.com/marketplace/llamapreview]
Happy to discuss our approach to privacy, technical implementation, or future plans!


