Hacker News | mika6996's comments

Then recommend a better explanation?


But you can't just switch between installed models like in Ollama, can you?



Are you sure this works? Both `inline_completion` and `chat_panel` give me "Property inline_completion is not allowed." Not sure whether it works regardless?


I really don't know. I had asked ChatGPT to create it; earlier it gave me a wrong one, and I had to try out a lot of things to see how it worked on my Mac.

I then pasted that whole convo into AI Studio (Gemini Flash) to summarize it and give you the correct settings, since my settings also included some servers and their IPs from the Zed remote feature.

Sorry that it didn't work. I again asked ChatGPT about my working configuration, and here's what I got (this may also not work, so YMMV):

{
  "agent": {
    "default_model": {
      "provider": "ollama",
      "model": "hf.co/sweepai/sweep-next-edit-1.5B:latest"
    },
    "model_parameters": []
  },

  "ui_font_size": 16,
  "buffer_font_size": 15,

  "theme": {
    "mode": "system",
    "light": "One Light",
    "dark": "One Dark"
  },

  // --- OLLAMA / SWEEP CONFIG ---
  "openai": {
    "api_url": "http://localhost:11434/v1",
    "low_latency_mode": true
  },

  //  TAB AUTOCOMPLETE (THIS IS THE IMPORTANT PART)
  "inline_completion": {
    "default_provider": {
      "name": "openai",
      "model": "hf.co/sweepai/sweep-next-edit-1.5B"
    }
  },

  //  CHAT SIDEBAR
  "chat_panel": {
    "default_provider": {
      "name": "openai",
      "model": "hf.co/sweepai/sweep-next-edit-1.5B"
    }
  }
}
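
If the config still errors out, it can help to confirm that Ollama's OpenAI-compatible endpoint is reachable at all, independent of Zed. A minimal sketch (this assumes Ollama is running locally on its default port and the Sweep model has already been pulled; the `build_request` helper name is mine, not part of any API):

```python
import json
import urllib.request

# The same api_url as in the Zed config above, plus the chat endpoint.
OLLAMA_URL = "http://localhost:11434/v1/chat/completions"

def build_request(model="hf.co/sweepai/sweep-next-edit-1.5B:latest"):
    """Build a chat-completion request against Ollama's OpenAI-compatible API."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": "Say hi"}],
    }
    return urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
```

To actually fire the request: `urllib.request.urlopen(build_request())` should return a JSON body with a `choices` list; if it refuses the connection, Zed's config is not the problem.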


What would tinygrad replace if it continues like this?


Potentially PyTorch and TensorFlow.


I think it has great potential for deployments on edge systems.


It is already used in comma.ai’s openpilot hardware.


But that is an inside deal; same founder, I believe.


Eating your own dogfood is good validation.


Who likes this ad bullshit anyway?


Did you try this method on any model? What do benchmarks say?


Honest answer: I tested it on GPT-2 (124M) and the results are mixed.

The mathematical claims hold up. I ran 58 tests covering ternary matmul correctness, memory compression, and numerical stability. The 16x compression works, the zero-multiplication property is verified, and the epistemic layer correctly abstains on high-entropy distributions.

What does not work is post-training quantization. When I quantized GPT-2's weights to ternary and ran generation, the output was garbage. This is expected because the model was never trained with ternary constraints. BitNet gets coherent output because they train from scratch with ternary baked in. I did not do that.

The actual novelty here is not the quantization itself but the epistemic output layer, which treats the ternary zero as "I do not know" rather than just sparsity. My tests show it correctly abstains on future predictions and impossible knowledge while answering factual queries confidently. But I should be clear that these tests use designed distributions, not outputs from a trained model. I do not have the compute to train a ternary model from scratch, so coherent generation remains theoretical.

The code is at github.com/Zaneham/Ternary_inference if you want to poke at it. Happy to be proven wrong on any of this.

tl;dr: yes, it works, but current models aren't made for it. The most interesting thing is that the LLM can say when it doesn't know.
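
For a concrete picture of the two core ideas described above, here is a minimal sketch of ternary quantization with an add/subtract-only matmul, plus entropy-based abstention. The function names and thresholds are my own illustrative assumptions, not taken from the linked repo:

```python
import numpy as np

def ternarize(w, threshold=0.05):
    """Quantize float weights to {-1, 0, +1} by magnitude threshold.
    Two bits per weight vs. 32-bit floats is where the ~16x
    compression claim comes from."""
    t = np.zeros_like(w, dtype=np.int8)
    t[w > threshold] = 1
    t[w < -threshold] = -1
    return t

def ternary_matvec(t, x):
    """Matrix-vector product over ternary weights using only additions
    and subtractions; no multiplications are performed."""
    pos_sums = np.array([x[row == 1].sum() for row in t])
    neg_sums = np.array([x[row == -1].sum() for row in t])
    return pos_sums - neg_sums

def entropy_abstain(probs, max_entropy_frac=0.8):
    """Abstain ('I do not know') when the output distribution is close
    to uniform, i.e. its entropy is a large fraction of the maximum."""
    p = np.clip(probs, 1e-12, 1.0)
    h = -(p * np.log(p)).sum()
    return h / np.log(len(p)) > max_entropy_frac
```

For example, `ternary_matvec(ternarize(w), x)` matches the ordinary dot product of the ternarized matrix with `x`, and `entropy_abstain` fires on a uniform distribution but not on a confident one.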


Does anybody have an archive of this article?



I really love the recommended background music while reading this article. More blogs should adopt that; really quirky.


Does anybody really think this is a plausible theory?


About as much as I do when a single author posts that they have proved P = NP.


Also, this has not been published in a peer-reviewed journal. Not everything published in a peer-reviewed journal is true, but it's a minimal filter.


One might say peer review is a trust signal, one of the many signals used to evaluate scientific results.


Which LLM is running with Harper?


None

