Starting today, all paying API customers have access to GPT-4. In March, we introduced the ChatGPT API, and earlier this month we released our first updates to the chat-based models. We envision a future where chat-based models can support any use case. Today we’re announcing a deprecation plan for older models of the Completions API, and recommend that users adopt the Chat Completions API.

    • @AutoTLDR (bot)
      1 year ago

      TL;DR: (AI-generated 🤖)

      On January 4, 2024, applications using stable model names for base GPT-3 models (ada, babbage, curie, davinci) will be automatically upgraded to new replacements (ada-002, babbage-002, curie-002, davinci-002), which will be available for early testing in the coming weeks. Developers using older completion models such as text-davinci-003 will need to upgrade manually by specifying gpt-3.5-turbo-instruct, a drop-in replacement, in their API requests; it will also be available for early testing. Developers who wish to keep using fine-tuned models will need to fine-tune replacements on the new base GPT-3 models or on newer models such as gpt-3.5-turbo and gpt-4. Users who previously fine-tuned older models will receive priority access to GPT-3.5 Turbo and GPT-4 fine-tuning, and support will be provided to help users transition smoothly. Developers who have used the older models will receive more information once the new completion models are ready for testing.
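      Per the announcement, the upgrade off text-davinci-003 is a drop-in swap of the model name in the Completions request. A minimal sketch (field names follow the public Completions API; the prompt here is a placeholder):

      ```python
      # Sketch: migrating a Completions request from text-davinci-003 to its
      # drop-in replacement is a model-name swap in the request body.
      import json

      old_request = {
          "model": "text-davinci-003",   # deprecated completion model
          "prompt": "Say hello.",        # placeholder prompt
          "max_tokens": 16,
      }
      new_request = dict(old_request, model="gpt-3.5-turbo-instruct")

      body = json.dumps(new_request)
      ```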

      Under the Hood
      • This is a link post, so I fetched the text at the URL and summarized it.
      • My maximum input length is set to 12000 characters. The text was short enough, so I did not truncate it.
      • I used the gpt-3.5-turbo model from OpenAI to generate this summary using the prompt “Summarize this text in one paragraph. Include all important points.”
      • I can only generate 100 summaries per day. This was number 0.
      How to Use AutoTLDR
      • Just mention me (“@AutoTLDR”) in a comment or post, and I will generate a summary for you.
      • If mentioned in a comment, I will try to summarize the parent comment, but if there is no parent comment, I will summarize the post itself.
      • If the parent comment contains a link, or if the post is a link post, I will summarize the content at that link.
      • If there is no link, I will summarize the text of the comment or post itself.
      • 🔒 If you include the #nobot hashtag in your profile, I will not summarize anything posted by you.
  • 𝕊𝕚𝕤𝕪𝕡𝕙𝕖𝕒𝕟 (OP)
    1 year ago (edited)

    TL;DR: (human-generated 👱‍♂️)

    • GPT-4 API access for all current paying customers
    • New instruction model: gpt-3.5-turbo-instruct
    • Deprecation of the Edits API
    • Deprecation of all older embedding models in favor of text-embedding-ada-002, but don’t worry:

    We recognize this is a significant change for developers using those older models. Winding down these models is not a decision we are making lightly. We will cover the financial cost of users re-embedding content with these new models. We will be in touch with impacted users over the coming days.
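    Re-embedding against text-embedding-ada-002 amounts to resending each stored text to the embeddings endpoint. A rough sketch of the request body (field names follow the public API; the input text is a placeholder):

    ```python
    # Sketch of a request body for OpenAI's /v1/embeddings endpoint, as used
    # when re-embedding existing content with the replacement model.
    import json

    payload = {
        "model": "text-embedding-ada-002",
        "input": ["a chunk of previously embedded text"],  # placeholder
    }
    body = json.dumps(payload)
    # The response carries one 1536-dimensional vector per input string.
    ```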

  • 𝕊𝕚𝕤𝕪𝕡𝕙𝕖𝕒𝕟 (OP)
    1 year ago (edited)

    I’ve just tried the version of GPT-4 available via the API. There has been a lot of talk about GPT-4 being “nerfed” since the May update, and I wanted to check whether I could spot an obvious difference. I ran the same prompt in both the web-based ChatGPT with GPT-4 selected, and a self-hosted chat UI that uses the API. The API call used a temperature value of 0.7.
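    For reference, a Chat Completions request with that temperature looks roughly like this (a minimal sketch; the prompt is abbreviated and the field names follow the public API):

    ```python
    # Sketch of the request body a chat UI sends to the Chat Completions
    # endpoint for this kind of experiment (prompt abbreviated).
    import json

    payload = {
        "model": "gpt-4",
        "temperature": 0.7,
        "messages": [
            {"role": "user",
             "content": "I use Nginx as a reverse proxy on my Ubuntu server..."},
        ],
    }
    body = json.dumps(payload)
    ```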

    Here is the prompt I used

    I use Nginx as a reverse proxy on my Ubuntu server. There is a web app running on localhost:3456, and I want to expose it to the internet using the hostname chat.botond.online. I also want to protect it using HTTP Basic authentication. I have the Apache tools for managing htpasswd files installed.

    Could you please write an Nginx config file for this site, and also explain how I can create an htpasswd file and use it in this Nginx config to protect it?

    Here is ChatGPT's response

    Sure, I can certainly help with that. We will perform these tasks in two steps:

    1. Create an htpasswd file
    2. Write an Nginx config file

    Let’s start with the first one.

    Step 1: Create an htpasswd file

    You can create a .htpasswd file by using the htpasswd command that comes with the Apache httpd server. In your case, you can create a user (let’s say user1) like this:

    sudo htpasswd -c /etc/nginx/.htpasswd user1
    

    This will prompt you to enter and confirm a password for the user user1. The -c option tells htpasswd to create a new file.

    Please note: This will overwrite the file if it already exists. If you want to add additional users to an existing .htpasswd file, do not use the -c flag.

    Step 2: Write an Nginx config file

    Now, let’s create an Nginx configuration. You can add a new server block in your Nginx config, or create a new config file under /etc/nginx/sites-available/ and symlink it to /etc/nginx/sites-enabled/. I’ll create a new server block as an example:

    server {
        listen 80;
        server_name something.somewhere.com;
    
        location / {
            auth_basic "Restricted Content";  # This is the message that will be shown in the login prompt
            auth_basic_user_file /etc/nginx/.htpasswd;  # The file you created earlier
    
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    
            proxy_pass http://localhost:3456;
        }
    }
    

    This will set up basic HTTP authentication on your site and also reverse proxy requests to your web app running on localhost:3456.

    Note: If you’re planning to expose this to the internet, I would strongly recommend setting up SSL as well. You can use a free service like Let’s Encrypt to get an SSL certificate. If you don’t secure your site with HTTPS, the HTTP Basic authentication credentials will be sent in plain text, which is a significant security risk.

    Here is the response to the same prompt using the API

    Sure, let’s start by creating the htpasswd file for HTTP basic authentication. You can create a .htpasswd file using htpasswd command that comes with the apache2-utils package. If it’s not installed, you can install it via apt-get install apache2-utils.

    To create a .htpasswd file, run the following command:

    htpasswd -c /etc/nginx/.htpasswd username
    

    Replace username with the username you want. After running this command, you will be prompted to enter and confirm a password for this user. The -c flag is used to create a new file. Do not use this flag if you already have a .htpasswd file and just want to add a new user.

    Now, let’s create a Nginx configuration file. Using your favorite text editor (nano, vim, etc.), open up a new configuration file in /etc/nginx/sites-available/, for example:

    sudo nano /etc/nginx/sites-available/something.somewhere.com
    

    Then, paste the following configuration:

    server {
        listen 80;
        server_name something.somewhere.com;
    
        location / {
            auth_basic "Restricted Content";
            auth_basic_user_file /etc/nginx/.htpasswd;
    
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
    
            proxy_pass http://localhost:3456;
        }
    }
    

    In this configuration:

    • listen 80; means Nginx is listening on port 80 for incoming connections.
    • server_name something.somewhere.com; specifies the hostname.
    • auth_basic "Restricted Content"; enables HTTP basic authentication with the message “Restricted Content”.
    • auth_basic_user_file /etc/nginx/.htpasswd; specifies the location of the .htpasswd file.
    • The proxy_set_header directives are used to pass on some client information to the proxied server.
    • proxy_pass http://localhost:3456; is used to forward requests to your web app running at localhost:3456.

    Save and close the file.

    To enable this site, create a symbolic link of it in the sites-enabled directory:

    sudo ln -s /etc/nginx/sites-available/something.somewhere.com /etc/nginx/sites-enabled/
    

    Finally, test the configuration and restart Nginx:

    sudo nginx -t
    sudo systemctl restart nginx
    

    Now your web app should be accessible from the internet at http://something.somewhere.com, and it should ask for a username and password as you specified in the .htpasswd file.

    There is a marked difference between the two responses, but as far as I can see, both are correct and high-quality. ChatGPT is less verbose, while the API’s answer goes into much more detail. I slightly prefer the API’s answer.
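    One note on ChatGPT’s HTTPS warning: it is well founded, because HTTP Basic auth credentials are only base64-encoded on the wire, not encrypted. A quick illustration (the credentials are placeholders):

    ```python
    # HTTP Basic authentication only base64-encodes "user:password" -- anyone
    # who can read the traffic can decode it, hence the advice to use HTTPS.
    import base64

    user, password = "user1", "s3cret"          # placeholder credentials
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    header = f"Authorization: Basic {token}"
    decoded = base64.b64decode(token).decode()  # trivially reversible
    ```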

    I will probably try the same experiment with much more complex problems (if anyone has a suggestion for a good test case, please let me know in a comment), but on this simple problem, both performed very well.