Which model should you use?
PearPie gives you a choice between local, peer-to-peer, and premium models on the PearPie Network. Here's a friendly cheat sheet for picking the right one.
Local models
Local models live on your computer. Once you download one, your messages stay on the machine. There's no internet round-trip when you press send. They're free and work offline.
The trade-off is hardware. PearPie ships two local models: Gemma 4 runs on most machines (RAM is the main requirement, no GPU needed), while Qwen 3.5 unlocks if you have a GPU and is noticeably faster and more capable on complex reasoning. More open-source options are on the way.
Best for: everyday questions, casual brainstorming, anything you'd want to keep entirely on your own machine.
Peer-to-peer (your other devices)
If you've linked another device, you can ask your phone to use a model running on your home desktop, or vice versa. The chat goes directly between the two devices over an encrypted connection, with no central server in the middle.
This is the sweet spot when you want a bigger model than your current device can run, but you don't want to use cloud credits. It only works when the other device is on and connected.
Best for: getting more power out of the device you happen to have on you, without paying or going through a cloud provider.
Premium models (PearPie Network)
The PearPie Network is for models you can't or don't want to host yourself. Some models are closed-source (Claude is the obvious example) and can only be accessed through a hosted provider. Others, like larger DeepSeek and Mistral builds, are open-source but heavy enough that running them locally isn't practical for most people.
Underneath, the PearPie Network uses the same peer-to-peer mechanism as connections between your own devices: PearPie joins your network as a private peer that runs those models on European infrastructure. Your message is discarded after the reply. PearPie doesn't store the request, and providers retain nothing permanently.
Premium models cost credits. They're worth it when you want the absolute best response on a hard question, a long document, or a precise piece of writing. They're also useful if your hardware just isn't up to running a local model that's good enough for what you're trying to do. See pricing →
Best for: harder problems, longer or more nuanced conversations, anything where you want the strongest available answer, or anytime your local hardware can't keep up.
Quick cheat sheet
If you're not sure, this is the rough order to try:
- Casual question? Local model. Fast, free, completely private.
- On your phone but want more power? Use the model on your linked desktop.
- Local model too slow or struggling? Premium model. Same goes for older or lower-spec hardware that can't run the bigger local options.
- Tricky problem or long document? Premium model. Worth a credit or two.
You can always switch mid-conversation if a local model is struggling. The model picker is right there at the top of the chat window.