GPT-4o mini vision pricing is odd

Sorry if someone's posted this before but I couldn't see anything.

I find it a bit strange that OpenAI has made GPT-4o mini effectively cost the same as the non-mini model for vision, by making each image "tile" worth many more tokens in mini than in the original 4o model.

https://openai.com/api/pricing/

GPT-4o:
150 x 150px image = 255 tokens (170 tile tokens + 85 base tokens)
255 tokens = US$0.001275

GPT-4o mini:
150 x 150px image = 8500 tokens (5667 tile tokens + 2833 base tokens)
8500 tokens = US$0.001275
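The arithmetic above can be sanity-checked with a quick sketch. Assumptions (rates as of the time of posting): GPT-4o input at US$5.00 per 1M tokens, GPT-4o mini input at US$0.15 per 1M tokens, and a 150x150px image fitting in a single tile, so the token count is just tile tokens plus base tokens:

```python
def image_cost(tile_tokens: int, base_tokens: int, usd_per_million: float):
    """Return (total tokens, USD cost) for a single-tile image."""
    total = tile_tokens + base_tokens
    return total, total * usd_per_million / 1_000_000

# GPT-4o: 170 tokens/tile + 85 base, at $5.00/1M input tokens
print(image_cost(170, 85, 5.00))      # (255, ~US$0.001275)

# GPT-4o mini: 5667 tokens/tile + 2833 base, at $0.15/1M input tokens
print(image_cost(5667, 2833, 0.15))   # (8500, ~US$0.001275)
```

Both work out to the same ~US$0.001275 per image, which is the whole point: the ~33x token inflation exactly cancels the ~33x cheaper per-token rate.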

I had a fun project in mind that would compare images, so I was excited about a really cheap model (especially with their 50% batch discount), but it's a bit disappointing that the discount doesn't carry over to images.

In contrast, Anthropic just uses the formula `tokens = (width px * height px) / 750` and charges the corresponding model's per-token rate, and for now Haiku works out much cheaper per image than 4o mini.
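For comparison, here's the same 150x150px image under the Anthropic formula quoted above (a sketch assuming that formula applies unchanged at this image size):

```python
def anthropic_image_tokens(width_px: int, height_px: int) -> int:
    """Approximate image token count per Anthropic's documented rule:
    tokens = (width * height) / 750."""
    return int(width_px * height_px / 750)

print(anthropic_image_tokens(150, 150))  # 30 tokens, vs 8500 on GPT-4o mini
```

So the same small image costs 30 tokens on Claude versus 8500 on GPT-4o mini; the final price gap then depends on the per-token rates of the models you compare.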

Note:
I did verify that this isn't an error on the pricing page: I compared two small images via the API and got the following response: `CompletionUsage(completion_tokens=13, prompt_tokens=17128, total_tokens=17141)`

Edit: Seems like it's intentional. There's a tweet from OpenAI acknowledging it: https://x.com/romainhuet/status/1814054938986885550?t=AMFK4svMvCluYqAXUqRDMQ&s=19