
Issue using Gemini Models #794

Closed
zacsims opened this issue Jan 8, 2025 · 3 comments
Labels
bug Something isn't working

Comments

zacsims commented Jan 8, 2025

Hello,

I am trying to use the Gemini models with the latest version of paperqa like so:

```python
import os

from paperqa import Settings, ask
from paperqa.settings import AgentSettings

# litellm reads GEMINI_API_KEY from the environment; the variable itself
# is never passed to paperqa.
GEMINI_API_KEY = os.getenv("GEMINI_API_KEY")
llm = "gemini/gemini-1.5-flash"

answer_response = ask(
    "What is Paper-QA?",
    settings=Settings(
        agent=AgentSettings(agent_llm=llm),
        llm=llm,
        summary_llm=llm,
        embedding="gemini/text-embedding-004",
    ),
)
```

However, I am getting this error in response:

```
BadRequestError: litellm.BadRequestError: VertexAIException BadRequestError - {
  "error": {
    "code": 400,
    "message": "* GenerateContentRequest.tools[0].function_declarations[0].parameters.properties: should be non-empty for OBJECT type\n* GenerateContentRequest.tools[0].function_declarations[1].parameters.properties: should be non-empty for OBJECT type\n",
    "status": "INVALID_ARGUMENT"
  }
}
```

Attached is the entire output log.

error.txt

@dosubot dosubot bot added the bug Something isn't working label Jan 8, 2025
derspotter commented Jan 8, 2025

I had a similar problem, but only when using Gemini models as the agent LLM; as the LLM for answering they were fine. Right now I am using gemini-2.0-flash-thinking-exp as the answer LLM and it works perfectly. Just use claude, deepseek, or openai as the agent LLM and you should be fine.

This is what I am using right now:
```shell
pqa --settings high_quality \
  --llm gemini-2.0-flash-thinking-exp \
  --llm_config '{"model_list":[{"model_name":"gemini-2.0-flash-thinking-exp","litellm_params":{"model":"gemini/gemini-2.0-flash-thinking-exp"}}],"rate_limit":{"gemini-2.0-flash-thinking-exp":"4000000 per 1 minute"}}' \
  --summary_llm deepseek-chat \
  --summary_llm_config '{"model_list":[{"model_name":"deepseek-chat","litellm_params":{"model":"deepseek/deepseek-chat"}}]}' \
  --agent.agent_llm claude-3-5-sonnet-20240620 \
  --agent.agent_llm_config '{"model_list":[{"model_name":"claude-3-5-sonnet-20240620","litellm_params":{"model":"anthropic/claude-3-5-sonnet-20240620"}}],"rate_limit":{"claude-3-5-sonnet-20240620":"40000 per 1 minute"}}' \
  ask "What is the problem with capital mobility?"
```

My remaining problem is that it always re-indexes from scratch, and that sometimes runs into rate limits.

zacsims (Author) commented Jan 8, 2025

@derspotter That makes sense, thanks for the suggestion.

jamesbraza (Collaborator) commented Jan 8, 2025

Hi all, this was an interesting issue. I logged a separate issue #796 with the underlying problem.
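For context, the 400 above complains about tool (function) declarations whose parameter schema is an OBJECT with no properties. Some providers accept that shape for zero-argument tools, but Gemini's request validation rejects it. A minimal sketch of the two shapes (the tool name and property are hypothetical, for illustration only):

```python
# A tool declaration whose OBJECT parameter schema has no properties.
# This is the shape Gemini rejects with "should be non-empty for OBJECT type".
rejected_tool = {
    "name": "gather_evidence",  # hypothetical tool name
    "parameters": {"type": "object", "properties": {}},  # empty -> 400
}

# Gemini requires at least one property for OBJECT-typed parameters:
accepted_tool = {
    "name": "gather_evidence",
    "parameters": {
        "type": "object",
        "properties": {"question": {"type": "string"}},  # non-empty -> ok
    },
}

assert rejected_tool["parameters"]["properties"] == {}
assert "question" in accepted_tool["parameters"]["properties"]
```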

> Just use claude, deepseek or openai as agent llm and you should be fine.

Yes, this is the workaround for now: use another LLM provider that supports empty tool parameters for the agent LLM. The other LLMs (llm, summary_llm) are not impacted, since they don't make tool selections.
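Applied to the snippet from the original report, the workaround looks roughly like this (a sketch only; the non-Gemini agent model name is an example, and any provider that tolerates empty tool parameters should work):

```python
from paperqa import Settings, ask
from paperqa.settings import AgentSettings

gemini = "gemini/gemini-1.5-flash"

answer_response = ask(
    "What is Paper-QA?",
    settings=Settings(
        # Only the agent LLM hits the tool-schema issue, so swap it out...
        agent=AgentSettings(agent_llm="gpt-4o"),  # example non-Gemini model
        # ...and keep Gemini for answering, summarizing, and embeddings.
        llm=gemini,
        summary_llm=gemini,
        embedding="gemini/text-embedding-004",
    ),
)
```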

Thanks for the issue report @zacsims and your help @derspotter.
