Issue using Gemini Models #794
Comments
I had a similar problem, but only when using Gemini models as the agent LLM. As the LLM for answering, they were fine. Right now I am using gemini-2.0-flash-thinking-exp as the answer LLM and it works perfectly. Just use Claude, DeepSeek, or OpenAI as the agent LLM and you should be fine. My remaining problem is that it always re-indexes, and sometimes that runs into rate limits.
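A minimal configuration sketch of that workaround, assuming paper-qa's `Settings`/`AgentSettings` API and litellm-style model identifiers — the exact field names and model strings here are illustrative and should be checked against the installed paper-qa version:

```python
# Hypothetical sketch of the workaround described above:
# Gemini for answering/summarizing, a non-Gemini provider for the agent LLM.
# Model identifier strings are illustrative, not verified.
from paperqa import Settings
from paperqa.settings import AgentSettings

settings = Settings(
    llm="gemini/gemini-2.0-flash-thinking-exp",          # answer LLM: Gemini is fine here
    summary_llm="gemini/gemini-2.0-flash-thinking-exp",  # summary LLM: Gemini is fine here
    agent=AgentSettings(
        agent_llm="claude-3-5-sonnet-20241022",          # agent LLM: use Claude/DeepSeek/OpenAI
    ),
)

# Then query as usual (requires the relevant API keys to be set):
# from paperqa import ask
# answer = ask("my question", settings=settings)
```

This keeps Gemini where it works (answering) while routing only the tool-calling agent step to a provider that accepts empty tool parameters.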
@derspotter That makes sense, thanks for the suggestion.
Hi all, this was an interesting issue. I logged a separate issue #796 with the underlying problem.
Yes, this is the workaround for now: use another LLM provider that supports empty tool parameters for the agent LLM. Thanks for the issue report @zacsims and for your help @derspotter.
Hello,
I am trying to use the Gemini models with the latest version of paperqa like so:
However, I am getting this error in response:
Attached is the entire output log.
error.txt