Discussion about this post

algobaker:

I find these discussions rather dull, because they lack all the nuance of what, exactly, you want the system to achieve. AGI is such a broad term. Debating in the abstract whether to scale up LLMs or to do [something else] is like debating whether chips or ice cream is the better food.

If the criterion is this vacuous notion of 'general intelligence', you can argue for whatever approach you like, because nobody has a clue. It is much more interesting to think about specifics like "how do we improve our capabilities in proving theorems?" or "how do we design better experiments in the life sciences?", which at least have some concreteness to them.
