Frequent hallucinations!!!
I have been trying out Lumo's free version for a few days, and the hallucinations are frustratingly frequent. I understand why Proton does not disclose which models handle queries or let users pick specific ones — presumably to streamline usage for simpler tasks. But even basic questions can trigger hallucinations, and the model often fails to retain context from the previous sentence. The hard limit of five web searches per query does not help either. So far my experience has been underwhelming. I'm not sure whether Lumo+ improves things, but at the very least Proton should be more transparent about which models are in use and ensure more consistent model selection.