  • it just predicts the next word out of likely candidates based on the previous words

    An entity that could consistently predict the next word of any conversation, book, or news article with extremely high accuracy would quite literally be a god, because it could effectively predict the future. So it is not surprising to me that GPT’s performance is inconsistent.
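
    And “predicts the next word out of likely candidates” concretely means sampling from a probability distribution over tokens. A toy bigram sketch of the idea (the corpus is made up, and a real LLM conditions on the whole context, not just the one previous word):

    ```python
    import random
    from collections import defaultdict, Counter

    # Toy bigram "model": count which word follows which in a tiny corpus.
    corpus = "the cat sat on the mat and the cat slept".split()
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def predict_next(prev_word: str) -> str:
        """Sample the next word from the candidates, weighted by frequency."""
        words, counts = zip(*following[prev_word].items())
        return random.choices(words, weights=counts, k=1)[0]

    # After "the": "cat" has probability 2/3, "mat" has probability 1/3.
    print(predict_next("the"))
    ```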

    It won’t even know it’s written itself into a corner

    In many cases it does. For example, if GPT gives you a wrong answer, you can often just send an empty message (a single space) and GPT will say something like: “Looks like my previous answer was incorrect, let me try again: blah blah blah”.
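
    A rough sketch of that nudge with the OpenAI Python client (the model name and the arithmetic example are my own, and the self-correction is likely rather than guaranteed):

    ```python
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    history = [
        {"role": "user", "content": "What is 17 * 23?"},
        {"role": "assistant", "content": "17 * 23 = 401."},  # wrong (it's 391)
        {"role": "user", "content": " "},  # the "empty" single-space nudge
    ]

    reply = client.chat.completions.create(model="gpt-4o-mini", messages=history)
    # The wrong answer is now part of the model's own input, so it can
    # "notice" the mistake the same way it would notice one in your text.
    print(reply.choices[0].message.content)
    ```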

    And until we get a new approach to LLMs, we can only improve it by adding more training data and more layers, allowing it to pick out more subtle patterns in larger amounts of data.

    This says nothing. You are effectively saying: “Until we can find a new approach, we can only expand on the existing approach”, which is obvious.
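
    For what it’s worth, “adding more layers” just means turning a couple of dials up. A back-of-envelope sketch of how those dials map to parameter count (the 12·L·d² estimate is the standard rough approximation, not any model’s published formula, and the two presets only loosely match GPT-2 and GPT-3):

    ```python
    def approx_params(n_layers: int, d_model: int, vocab: int = 50_000) -> int:
        """Back-of-envelope transformer size: ~12 * L * d^2 parameters in the
        attention + MLP blocks, plus the token-embedding matrix."""
        return 12 * n_layers * d_model ** 2 + vocab * d_model

    print(f"{approx_params(12, 768):,}")    # ~123M, roughly GPT-2-small scale
    print(f"{approx_params(96, 12288):,}")  # ~175B, roughly GPT-3 scale
    ```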

    But new approaches come out all the time! Tokenization keeps advancing, and every week there is a new paper with a new model architecture. We are not stuck in some sort of hole.
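
    To make “tokenization keeps advancing” concrete: modern models split text into subword pieces with byte-pair-encoding variants, and those schemes change between model generations. A quick look with the tiktoken library (assuming it is installed; the exact token ids depend on the chosen encoding):

    ```python
    import tiktoken  # OpenAI's open-source BPE tokenizer

    enc = tiktoken.get_encoding("cl100k_base")  # GPT-4-era encoding

    ids = enc.encode("Tokenization keeps improving.")
    print(ids)                             # a list of integer token ids
    print([enc.decode([i]) for i in ids])  # the subword pieces they stand for
    ```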