17 Comments

Have you tried generating your whole app with GPT?

Wasp Lang is a declarative full-stack framework (https://wasp-lang.dev/), and we are experimenting with giving you an "almost finished" starter based just on your prompt.

    1. 1

      How did the process go for you? I suppose some design tweaks still needed to happen, or?

      1. 1

        Trial and error. I can code too, so I mainly felt like a project manager. To your point, I would make small tweaks, but it was super useful with API integrations, so I didn't need to read the docs.

    2. 1

      This is awesome, the design is slick.

  1. 3

    This is really interesting! I'd bet the DSL angle really does help the initial accuracy.

    1. 2

      Agreed! I'm not aware of any other frameworks that leverage a DSL, are you?

  2. 2

    Boilerplate code has been an interesting market; this is an interesting twist using AI.

  3. 1

    swyx has something interesting here: https://github.com/smol-ai/developer/

    It's an interesting tool to scaffold out, generate, then iterate on code with a "junior developer" AI.

    1. 1

      Have you tried it for e.g. React apps? 🙂

  4. 1

    I built a mini AutoGPT that could do that, but my problem was always that the context was too small to fit even one page of code sometimes. Eventually it loses track of what was already built. I always had to give it context on what already existed (the imports it might need) and the folder structure so it could code more accurately without hallucinating.

    1. 1

      Ah yes, that's the biggest issue for me as well: it forgets what it already has and makes up imports.

      For example, Wasp has the concepts of entities, operations, and pages. So I would generate the entities first (Prisma models), and then when it generates operations, I add to the prompt "You have these entities available: ...". And then when it generates the frontend pages, I let it know which actions it has access to, e.g. "getHabits" or "addNewHabit". It helps quite a bit.
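The staged prompting described in the comment above can be sketched roughly like this (the helper names and the entity/operation lists are illustrative assumptions, not Wasp's actual generator code):

```python
# Rough sketch of staged prompting: each stage's prompt is augmented with the
# names produced by the previous stage, so the model never has to remember
# them on its own. All names below are hypothetical.

def build_operations_prompt(entities: list[str]) -> str:
    # Stage 2: tell the model exactly which entities (Prisma models) exist.
    return (
        "Generate the Wasp operations (queries and actions).\n"
        "You have these entities available: " + ", ".join(entities)
    )

def build_page_prompt(operations: list[str]) -> str:
    # Stage 3: tell the model which operations the frontend pages may call.
    return (
        "Generate the React pages.\n"
        "You have these actions available: " + ", ".join(operations)
    )

# Outputs of the earlier stages get fed forward as explicit context.
entities = ["Habit", "User"]
operations = ["getHabits", "addNewHabit"]

print(build_operations_prompt(entities))
print(build_page_prompt(operations))
```

The point is just that each prompt is assembled from the previous stage's verified output rather than relying on the model's memory of the conversation.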

      1. 1

        If you have time, you could maybe check out other LLMs (Claude with 100k tokens, for example) that have big context sizes. You could use one to keep track of what you currently have and filter it down before passing it to GPT-4. It might give a more accurate result.

        I think by combining two LLMs, one for managing what you have and the other for generating the content, you'd get a better end result. Otherwise you're always limited by a small context window and will get more hallucinations.

        Eventually, using the 32k context of GPT-4 will be too expensive, and people will look at other LLMs that are cheaper.
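The two-LLM split suggested above could look roughly like this (all names here are hypothetical sketches; the summarizer stands in for a call to a long-context model such as Claude, the generator prompt for GPT-4):

```python
# Hypothetical sketch of the two-LLM idea: a long-context model keeps a
# compact summary of what the codebase already contains, and only that
# summary (not the raw code) goes into the generator's prompt.

def summarize_codebase(files: dict[str, list[str]]) -> str:
    # Stand-in for the long-context "manager" model; here we just list
    # each file and the names it exports.
    lines = []
    for path, exports in files.items():
        lines.append(f"{path}: exports {', '.join(exports)}")
    return "\n".join(lines)

def generation_prompt(task: str, summary: str) -> str:
    # The generator model receives the compact summary instead of raw code,
    # keeping its context window free for the actual task.
    return f"Existing code:\n{summary}\n\nTask: {task}"

files = {"src/habits.ts": ["getHabits", "addNewHabit"]}
print(generation_prompt("Add a page listing habits", summarize_codebase(files)))
```

In a real setup the summarizer would itself be an LLM call, but the shape is the same: manage state in one model, generate in the other.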

  5. 1

    This is quite fun! I was also experimenting with this. I had trouble with GPT-4 due to how slow it is, but GPT-3.5 was not so bad. The main challenge was ensuring I didn't consume too much context, or it would forget what I told it and start making stuff up.

    1. 1

      I guess the trick is to figure out a way to be exact enough while letting ChatGPT do its thing, stuff like providing very precise instructions for pieces of the app 🤷‍♂️
