How to Build and Launch an AI ChatGPT SaaS Product in Hours Using Fast2build?

(Updated: Tuesday, April 2, 2024)

AI products are one of the biggest trends right now, and many people want to build their own. Here, we will walk through how to build your own customized online AI ChatGPT product in just a few hours using Fast2build.

Once you become a Fast2build member, you can access the Starter Boilerplate project, download the code to your local machine, and launch it from the command line.

1. Start the Initial Run

First, make sure you have Node.js v18.0.0 or newer installed on your local machine.

Execute the following command in the project root directory to install dependencies:

yarn install
Then, execute the command to start the project:

yarn dev

2. Configure the Landing Page


To quickly generate a personalized homepage, Fast2build includes many built-in components, such as Banner, CallToAction, HowItWork, Newsletter, FAQ, and dozens more. As the platform evolves, more components will be added.

Open the app.config.ts file and configure the components you need. For details, refer to the Components section of the documentation.

home: {
    layout: [
        {
            name: 'FastHeroStandard',
            config: {
                title: "Launch Your SaaS Product In Hours",
                description: "Code Less, Launch Fast. Make it easier to monetize small products and quickly launch online",
                buttons: [
                    { name: "Get Start", href: "/" },
                    { name: "how to start", href: "/doc", type: "link" },
                ],
                image: {
                    src: "/image/sys/feature/fast2build-fetaure.png"
                }
            }
        },
        {
            name: 'FastFeatureVideo',
            config: {
                title: "Build your SaaS",
                description: "All-in-one solution to manage your SaaS business.",
                list: [
                    {
                        layout: "left",
                        hightlight: {
                            icon: "/svg/sys/setup.svg",
                            text: "Start",
                            color: "#d81e06"
                        },
                        text_main: "Simple setup",
                        text_description: "Download the source code and use the command line to start",
                        video_main: "/video/setup_init.mp4"
                    },
                    {
                        layout: "right",
                        hightlight: {
                            icon: "/svg/sys/login.svg",
                            text: "Config",
                            color: "#f5a33e"
                        },
                        text_main: "User login",
                        text_description: "Built-in integration of Google Login, Magic Login, Email Login",
                        video_main: "/video/login.mp4"
                    },
                    {
                        layout: "left",
                        hightlight: {
                            icon: "/svg/sys/payment.svg",
                            text: "Online",
                            color: "#d4237a"
                        },
                        text_main: "Stripe/Paddle Payment",
                        text_description: "Quickly get online payment with your personal Stripe/Paddle token",
                        video_main: "/video/payment_online.mp4"
                    }
                ]
            }
        }
    ]
}

3. Supabase Database

First, you need a Supabase account. Then, in the SQL Editor, create membership and order tables to store user membership information and purchase orders, and set up the corresponding RLS (Row Level Security) policies. For implementation details, refer to the Database section of the documentation.
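As a rough illustration of how the membership table might be consumed once it exists, here is a small sketch of a membership check. The column names (`user_id`, `expires_at`) and the function name are assumptions for this example, not the boilerplate's actual schema:

```typescript
// Hypothetical row shape for the membership table; the real schema in the
// Fast2build docs may differ. Column names here are illustrative only.
interface Membership {
  user_id: string
  expires_at: string // ISO timestamp at which the membership lapses
}

// Returns true when the user holds a membership that has not yet expired.
function isActiveMember(m: Membership | null, now: Date = new Date()): boolean {
  return m !== null && new Date(m.expires_at).getTime() > now.getTime()
}
```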

Configure the SUPABASE_URL and SUPABASE_KEY obtained from Supabase into the environment variables in the .env.local file in your project directory.
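For reference, the entries in .env.local might look like the following; both values below are placeholders to be replaced with your own project's credentials:

```
# .env.local (placeholder values)
SUPABASE_URL=https://your-project-ref.supabase.co
SUPABASE_KEY=your-supabase-anon-key
```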



4. Account Login

To implement Google login and Magic Link login, you need to create a project on Google Cloud. For instructions, refer to the Google OAuth section of the documentation.

Copy the Client ID (GOOGLE_ID) and Client Secret (GOOGLE_SECRET) into your Supabase dashboard (Authentication > Providers > Google).


5. Configure Stripe


Implement online payments with Stripe, which offers both development and production modes. You can test your product in development mode and switch to production keys once you're ready to go live.

There are two parts to configure in this section; for detailed instructions, refer to the Payments section of the documentation.


Price IDs are used in conjunction with the Pricing component to configure prices during checkout, and they need to be set up in the Stripe dashboard.

Create a new product and copy the price ID (price_xxx) into INIT.STRIPE.price1.id in the server/config/sys.js file.

const INIT = {
    STRIPE: {
        source: "fast2build", // the website host
        price1: {
            "id": "price_xxxx", // test-mode price ID
            "type": "one-time",
            "quantity": 1
        }
    }
}

Environment Variables

STRIPE_SECRET_KEY is required for integrating the Stripe SDK and enables you to call official Stripe API methods from the backend.

STRIPE_ENDPOINT_SECRET is used for listening to Stripe callbacks, verifying the legitimacy of webhook calls, etc.

Copy your secret key and add it to STRIPE_SECRET_KEY in .env.local

Copy the signing secret and add it to STRIPE_ENDPOINT_SECRET in .env.local
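The resulting entries in .env.local would look roughly like this; both values are placeholders (sk_test_ is Stripe's test-mode prefix, whsec_ is the webhook signing secret prefix):

```
# .env.local (placeholder values)
STRIPE_SECRET_KEY=sk_test_xxxxxxxx
STRIPE_ENDPOINT_SECRET=whsec_xxxxxxxx
```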


6. Backend API Supports OpenAI Streaming Requests


This part involves developing personalized business functionality. First, we need to implement calls to the OpenAI interface in the backend. This is relatively simple and only requires integrating the OpenAI SDK.

First, install two dependencies: openai and ai.

In server/config/sys.js, configure the OPENAI_API_KEY.

Create a new file under server/api/chat/ with the following code:
import OpenAI from 'openai'
import { OpenAIStream, StreamingTextResponse } from 'ai'
import type { ChatCompletionMessageParam } from 'openai/resources/chat'

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY })

export default defineEventHandler(async (event) => {
  try {
    // Extract the chat parameters from the body of the request
    const { messages, model, temperature, top_p } = (await readBody(event)) as {
      messages: ChatCompletionMessageParam[];
      model: string;
      temperature: number;
      top_p: number;
    };

    // Ask OpenAI for a streaming chat completion given the prompt
    const response = await openai.chat.completions.create({
      stream: true,
      messages: messages,
      model: model || 'gpt-4-1106-preview',
      temperature: temperature || 1,
      top_p: top_p || 1
    });

    // Convert the response into a friendly text-stream
    const stream = OpenAIStream(response);

    // Respond with the stream
    return new StreamingTextResponse(stream);
  } catch (e: any) {
    // Surface the error to the client
    throw createError({ statusCode: 500, statusMessage: e.message });
  }
});
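One subtlety in the parameter defaults above: a `||` fallback such as `temperature || 1` treats a legitimate value of 0 as falsy and silently replaces it with 1. A small sketch (the helper name and shape are hypothetical, not part of the boilerplate) using nullish coalescing avoids that:

```typescript
// Hypothetical helper illustrating safer defaults for optional chat parameters.
// `??` only falls back when the value is null or undefined, so temperature: 0 survives.
interface ChatParams {
  model?: string
  temperature?: number
  top_p?: number
}

function withDefaults(p: ChatParams) {
  return {
    model: p.model ?? 'gpt-4-1106-preview',
    temperature: p.temperature ?? 1,
    top_p: p.top_p ?? 1
  }
}
```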

7. Front-end Interactive UI Implementation

build chatGPT preview

The front-end pages are mainly implemented using Vue3, and the styles are achieved with TailwindCSS. Since Fast2build has built-in integration with the Element-plus UI library, many UI components like Button, Input, Dialog, Slider, etc., can be used directly.

Modify the Dashboard.vue page. It uses a flex layout, with the navigation on the left and the chat area on the right. The chat area supports the following features:

  • Supports Markdown syntax and code highlighting
  • Local storage of chat messages
  • Quick copy of content
  • Supports ChatGPT-4
  • Customizable system messages, temperature, etc.
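The "local storage of chat messages" feature above can be sketched as follows. The storage interface mirrors the part of `window.localStorage` that is needed, so the same functions work in the browser or against an in-memory store; all names here are illustrative rather than taken from the boilerplate:

```typescript
// Minimal interface matching the subset of window.localStorage we use,
// so the functions below can be tested without a browser.
interface KVStore {
  getItem(key: string): string | null
  setItem(key: string, value: string): void
}

// Hypothetical message shape for this sketch.
interface ChatMessage {
  id: string
  role: 'system' | 'user' | 'assistant'
  content: string
}

// Persist one session's messages under a namespaced key.
function saveMessages(store: KVStore, sessionId: string, msgs: ChatMessage[]): void {
  store.setItem(`chat:${sessionId}`, JSON.stringify(msgs))
}

// Load them back, returning an empty history when nothing was saved.
function loadMessages(store: KVStore, sessionId: string): ChatMessage[] {
  const raw = store.getItem(`chat:${sessionId}`)
  return raw ? (JSON.parse(raw) as ChatMessage[]) : []
}
```

In the browser, `window.localStorage` can be passed directly as the store.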

Below is the main request function, which sends the request and handles the streaming data returned by the API.

const generate = async (session: Session, promptMsgs: Message[], targetMsg: Message) => {
    generateProcess.value = true
    const response = await fetch(`/api/chat/completions`, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        messages: promptMsgs,
        systemMessage: settingConfig.value.systemMessage,
        temperature: settingConfig.value.temperature,
        top_p: settingConfig.value.top_p,
        model: settingConfig.value.model
      })
    })

    if (!response.ok) {
      throw new Error(response.statusText)
    }

    // The response body is a ReadableStream
    const data = response.body
    if (!data) {
      return
    }

    const reader = data.getReader()
    const decoder = new TextDecoder()
    let done = false
    let lastMessage = ''

    while (!done) {
        // Stop reading if generation was cancelled by the user
        if (!generateProcess.value) {
            break
        }
        const { value, done: doneReading } = await reader.read()
        done = doneReading
        const chunkValue = decoder.decode(value)
        lastMessage = lastMessage + chunkValue
        const created = new Date().getTime()

        // Write the accumulated text back into the target message
        for (let i = 0; i < session.messages.length; i++) {
            if (session.messages[i].id === targetMsg.id) {
                currentConversation.value.messages[i] = {
                    ...targetMsg,
                    content: lastMessage,
                    created
                }
            }
        }
    }

    if (generateProcess.value) {
        await updateConversationList()
    }
}
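The reader loop follows a general pattern for consuming a `ReadableStream` of bytes. Stripped of the UI state, it can be sketched as below; the function name is illustrative, and `{ stream: true }` is passed to the decoder so multi-byte characters split across chunk boundaries decode correctly:

```typescript
// Drain a byte stream, decoding chunks as UTF-8 and reporting the
// accumulated text after each chunk (as the chat UI does per token batch).
async function readTextStream(
  stream: ReadableStream<Uint8Array>,
  onUpdate: (textSoFar: string) => void
): Promise<string> {
  const reader = stream.getReader()
  const decoder = new TextDecoder()
  let text = ''
  while (true) {
    const { value, done } = await reader.read()
    if (done) break
    // stream: true keeps multi-byte characters split across chunks intact
    text += decoder.decode(value, { stream: true })
    onUpdate(text)
  }
  return text
}
```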

8. Deployment and Launch


Once the basic and business functionalities have been debugged locally, you can deploy to an online hosting service. There are many online services that support Nuxt.js deployment, such as Vercel, AWS Amplify, Netlify, and others.

If deploying to Vercel, deploy using Git:

  • Push your code to your git repository (GitHub, GitLab, Bitbucket).
  • Import your project into Vercel.
  • Vercel will detect that you are using Nitro and will enable the correct settings for your deployment.
  • Your application is deployed!

Remember to update your online environment variables to production settings.

With that, the AI ChatGPT product is successfully launched.

By following the steps above, you can launch your own online AI ChatGPT SaaS product in just a few hours, complete with a basic website, account system, payment system, and membership services. The AI ChatGPT part is personalized and developed by you. The key point is that all of this is basically free, with the only cost being the purchase of a domain name. Similarly, if you want to create a different type of product, you only need to personalize and develop the business part.

This way, you can quickly launch your product for users to try, rapidly validating your ideas. With Fast2build, your ideas can be quickly realized, launched, and validated.


© Copyright 2024 fast2build