They’re still trying to sell you the benefits of a subscription and won’t offer you unlimited usage free of charge, if I’m not mistaken.
I’m definitely not going to study the TOS of their freemium model. I didn’t investigate it further, as the “sign up & log on here” prompt made me turn away. Sure, my comment that it requires a login to use their services might be a misconception on my side.
I’m still skeptical about how the conversations / requests would be used on their side.
Comparing energy consumption between Perplexity and DuckDuckGo, for example, is unfair; it’s comparing apples to oranges. Perplexity answers complex questions, which DuckDuckGo doesn’t. With classical search engines you need to dig deeper into the search results and eventually search further before you find a proper answer, which consumes more energy.
Example question: I have a list of IP addresses in $all_IP. How can I find out how often each IP address appears in that list?
Perplexity gives you a useful answer right away: printf '%s\n' $all_IP | sort | uniq -c | sort -nr
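To illustrate what that one-liner actually does, here is a small sketch you can paste into a shell. It assumes $all_IP is a plain whitespace-separated string (not a bash array), and the addresses are just made-up sample values:

```bash
# Sample list of IP addresses (made-up values), whitespace-separated:
all_IP="10.0.0.1 10.0.0.2 10.0.0.1 10.0.0.2 10.0.0.1 192.168.1.5"

# The unquoted $all_IP relies on word splitting, so printf emits one address per line;
# sort groups duplicates, uniq -c counts them, sort -nr orders by count descending:
printf '%s\n' $all_IP | sort | uniq -c | sort -nr
#   3 10.0.0.1
#   2 10.0.0.2
#   1 192.168.1.5
```

If the addresses were stored in a bash array instead, you would write printf '%s\n' "${all_IP[@]}" so each element is passed as its own argument.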
If you put this question to a classical search engine without AI, like DuckDuckGo, you need to do a lot more research yourself, consuming a lot more energy until you get to the answer.
This is just a simple example. The difference becomes more obvious the more complex the question is.
I’m using it a lot too (for free - no user account). As @mbod says, “it shows the web sources it used to create the answer”, so you can check that it has accurately presented the information from that source, and that the source is worthwhile/believable.
It’s a good starting point, and easily leads you to follow-up questions and sources.
I enabled it in the sidebar in Firefox back when it was only a hidden setting.
Unfair? I don’t know. But I would say it is relevant and important to keep in mind.
Besides, I already said:
However, according to perplexity.ai itself, it consumes 10 to 100 times more energy for its searches.
Now that becomes quite relevant when it is included, for example, in Firefox and will most likely be used for search tasks that are not very sophisticated or complex, of the sort:
Where is the best Taco served in New York?
I may be wrong though.
At any rate, personally, I will make use of such models only rarely, and only when I can assure myself that it is more “cost efficient” than doing all the searching with “traditional” search engines.
A key takeaway could be: “Guys, let’s be responsible and not use chat models as our new first choice for searching the web for the trivial stuff”, to phrase it simply.
A chat model that performs web searches under the hood to aggregate its findings across multiple conventional search engines would be pointless and a waste of resources. If the trend drifts towards replacing traditional search engines with AI-powered services, the inference providers definitely have to do better than “25 times worse”.
There’s no doubt that the efficiency in answering complex questions can’t be matched that easily, if it is used that way explicitly. It’s more a question of: do the masses already prefer the convenience and ease of use of those AI-driven features? And how can we, as a society, prevent our energy consumption from increasing faster than we can keep up with?
User statistics from the large AI providers would definitely be interesting, just to see what the chat models are being used for and how improvements could be made. I’m definitely aware that researchers (some, hopefully) are actively investigating AI usage and end users’ behavior. But sadly, Google and its competitors won’t hand over data sets that easily to allow quantitative user studies at a representative scale.
A subscription model that penalizes trivial questions with a price increase could be a step towards “proper” tool usage, if you know what I mean.