
jsonstein@masto.deoan.org ("Jeff Sonstein") wrote:
my initial take is that I’ve driven my per-request costs down by at least 50% (and possibly more, I need more data) by switching from gpt-4o to gpt-4o-mini
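For context, a switch like this is typically a one-line change: the request shape stays the same and only the `model` parameter changes. Here is a minimal sketch assuming the official `openai` Python SDK and an `OPENAI_API_KEY` in the environment; the prompt content is illustrative, not from the original post.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Same request as before; only the model name changes.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # previously "gpt-4o"
    messages=[
        {"role": "user", "content": "Summarize this support ticket in one sentence."},
    ],
)

print(response.choices[0].message.content)
```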