3 Comments
May 11, 2023 · Liked by Laura

Great writeup, thanks for sharing your process! I'm glad to hear other folks are focusing on latency and waiting time. I've found it helps my patience to use OpenAI's streaming API. The completions and chat completions endpoints accept a `stream=True` argument, and with some work you can incrementally lay out pages for impatient readers. hth, good luck!
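For anyone curious what the incremental layout looks like, here is a minimal sketch. It uses a hard-coded `fake_stream` list in place of a live `stream=True` response (the chunk shape mirrors the chat completions delta format, but `collect_stream` and `fake_stream` are illustrative names, not part of the OpenAI client):

```python
def collect_stream(chunks):
    """Accumulate delta fragments from a streamed chat response,
    yielding the partial text after each chunk so the page can be
    re-rendered incrementally instead of waiting for the full reply."""
    pieces = []
    for chunk in chunks:
        delta = chunk.get("choices", [{}])[0].get("delta", {})
        content = delta.get("content")
        if content:
            pieces.append(content)
            yield "".join(pieces)

# Stand-in for the chunks the API yields when stream=True is set.
fake_stream = [
    {"choices": [{"delta": {"role": "assistant"}}]},   # first chunk: role only
    {"choices": [{"delta": {"content": "Hello"}}]},
    {"choices": [{"delta": {"content": ", world"}}]},
    {"choices": [{"delta": {}}]},                      # final chunk: no content
]

for partial in collect_stream(fake_stream):
    print(partial)  # render the growing text as it arrives
```

With a real client you'd iterate over the response object the same way, pushing each partial string to the page as it arrives.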
