
Are any of you using AI to help you code?

Discussion in 'General Discussion' started by Dubious-Drewski, May 2, 2024.

  1. Dubious-Drewski

    Dubious-Drewski

    Joined:
    Aug 19, 2012
    Posts:
    52
    I'm pretty handy with C#, and I've built some things I'm proud of. I've also coded myself into corners and been unable to fix it.

    I bet AI could find my errors if I couldn't. Opinions? I'm looking all over these forums and seeing no one talk about this.

    NLPs especially fascinate me. Here's what ChatGPT just told me:

     
  2. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    21,437
Yes, but its immediate benefit is limited. You have to learn to write good prompts, which includes learning when and how to change wording slightly to greatly increase the quality of your result, and how to identify hallucinations.

Additionally, the free tiers have very limited usefulness. In most cases, if you don't want to pay, or you're in a country where you can't access the paid AIs, you shouldn't even bother trying to use them.

    https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence)

    We've discussed it multiple times. If I recall it was mostly around the release of GPT-4.
     
    Last edited: May 2, 2024
    Antypodish likes this.
  3. Enzi

    Enzi

    Joined:
    Jan 28, 2013
    Posts:
    981
Short answer is no, but it's fun to try, and once in a blue moon it'll actually be useful.
I've given it a bunch of tries: UIToolkit, general math, rotations, rendering TMP manually, mostly when I was stuck and desperate enough. Unity- and math-related questions are terrible; it hallucinates too much. Total trash, 100%.

BUT, one time I asked ChatGPT a very specific SIMD question and it gave perfect code.
The prompt was:
"can you give me a simd version of packing a 128 length byte array into a 128 length bit array?"
"i need to set the resulting bit to 1 only when the byte is 255"
Not even that great of a prompt, but perfect SIMD code. That saved me quite some time. No idea what happened here; maybe someone asked the same thing on SO.
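For reference, a hand-written version of that packing might look roughly like the sketch below, using .NET's System.Runtime.Intrinsics (this is my own illustration of the problem, not the code ChatGPT produced; it assumes an x86 CPU with SSE2 and a project that allows unsafe code):

```csharp
using System;
using System.Runtime.Intrinsics;
using System.Runtime.Intrinsics.X86;

static class BitPacker
{
    // Packs a 128-byte array into a 128-bit (16-byte) array.
    // Bit i of the result is 1 only when input[i] == 255.
    public static unsafe byte[] Pack(byte[] input)
    {
        if (input.Length != 128)
            throw new ArgumentException("Expected exactly 128 bytes.");

        var result = new byte[16];
        var all255 = Vector128.Create((byte)0xFF);

        fixed (byte* p = input)
        {
            for (int i = 0; i < 8; i++)
            {
                // Compare 16 bytes at once; lanes equal to 255 become 0xFF.
                Vector128<byte> eq = Sse2.CompareEqual(Sse2.LoadVector128(p + i * 16), all255);
                // MoveMask collapses the high bit of each lane into a 16-bit mask.
                int mask = Sse2.MoveMask(eq);
                result[i * 2] = (byte)mask;
                result[i * 2 + 1] = (byte)(mask >> 8);
            }
        }
        return result;
    }
}
```

Eight 16-lane compares cover the 128 input bytes, and each `MoveMask` result supplies two bytes of the output mask.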
     
  4. zulo3d

    zulo3d

    Joined:
    Feb 18, 2023
    Posts:
    1,054
    I've only tried Google's Gemini. It seems to me that experienced programmers don't need AI and inexperienced programmers can't really trust it.

Personally, I like very compact code, which the AI doesn't seem to be very good at writing. I get the impression that they take publicly available code and then modify it. I've seen some code from Gemini that looks very familiar but with the variable and class names changed. Or sometimes the code is just plain wonky.

    I recommend inexperienced developers stick to learning from the code that's on GitHub.
     
    angrypenguin likes this.
  5. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    21,437
    It's known to be pretty bad. :p
     
  6. Murgilod

    Murgilod

    Joined:
    Nov 12, 2013
    Posts:
    10,253
No, and a lot of that is because it's been getting progressively worse. Unity's offerings are laughable already, but a lot of existing options have been outputting worse and worse code, the kind where I'd have to spend more time either doing loads of prompt massaging or just fixing it myself for it to be at all useful. It's still only truly good for boilerplate, but the thing about boilerplate is that once I write it, I just save it and reuse it forever. I don't need AI to do that for me.
     
  7. Antypodish

    Antypodish

    Joined:
    Apr 29, 2014
    Posts:
    10,814
I had no issue with generative tools for code until recently.
ChatGPT, for example, I use only sometimes, rather rarely, for short snippets of syntax, use cases, or method names I forgot. For me it's faster than using a search engine and sifting through forums or Stack Overflow.
Typically 1-15 lines max. Beyond that it becomes a debugging story, and it's easier to write my own code.

For example, I found solutions to a specific Wwise implementation which hadn't been clearly documented. I still don't know where the information was taken from for training, but the solution worked. There was a bit of hallucination, but it gave good pointers.
     
    marcoantap likes this.
  8. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,623
In my opinion, you should not use AI chatbots to do your work, because that is a borrowed skill: while a chatbot is doing the coding, you do not grow and do not improve. Also, if the service ever shuts down, you'll be royally screwed.

However, LLMs work fairly well as a teacher, a talking encyclopedia, or a temporary discussion partner to explain things. That is a valid use case.
     
  9. sacb0y

    sacb0y

    Joined:
    May 9, 2016
    Posts:
    936
Yeah, plenty. I got the thing in JetBrains. Helped debug stuff I've never touched before that was giving me errors lmao.

    It's also great for when I need a script that's normally tedious to write like some editor thing.
     
  10. DragonCoder

    DragonCoder

    Joined:
    Jul 3, 2015
    Posts:
    1,742
I'm using ChatGPT regularly to code smaller segments I'm too lazy to write, or for "out of the book" algorithms.
I'd likely not be successful with this approach if I didn't have the skill to debug and understand the code, however.
Occasionally it also helps me find specific algorithms (and their names) after I explain a problem.
It is not a perfect solution, as others have mentioned, but 70% AI plus 30% brain is a powerful combo :D

Tip: When it is stuck on editing a faulty piece of code without apparent progress, tell it: "Let's start fresh with a new approach please".
That's quite successful sometimes.
     
    Last edited: May 2, 2024
  11. PanthenEye

    PanthenEye

    Joined:
    Oct 14, 2013
    Posts:
    2,121
GitHub Copilot is sometimes nice as a fancier auto-complete, but other times it gives complete nonsense. It's hard to evaluate whether it's a net positive, since it adds to the mental load by making you read and verify the snippets it provides.

Paid ChatGPT is great for "I have this piece of code doing a thing, but I also need it to do this other thing or two" type of stuff. It's also decent at refactoring very messy code if you ask it to retain the exact same functionality.
     
  12. kdgalla

    kdgalla

    Joined:
    Mar 15, 2013
    Posts:
    4,669
Just for fun I tried Bing Chat about a year ago (I'd heard that ChatGPT was the best, but the ChatGPT sign-up website didn't accept my email at the time). Bing gave me some code that superficially resembled what the code I asked for might reasonably look like. The code it gave me didn't actually do anything, and a lot of the details made no sense on closer inspection. After that I kind of lost interest in the whole thing.
     
  13. DragonCoder

    DragonCoder

    Joined:
    Jul 3, 2015
    Posts:
    1,742
You should revisit...
Expecting perfection from the get-go is just not how it goes. If it worked that way, we'd already be out of a job =D
     
  14. Voronoi

    Voronoi

    Joined:
    Jul 2, 2012
    Posts:
    593
I've found the Unity code results mixed, and typically not worth the time to debug, using Bing and ChatGPT. What it really has been good for is very specific questions, for example regex requests to do something, where it's spot-on.

I also found it good at generating Gatsby GraphiQL code, which is very niche, but it seems to just work. Probably because it has fewer examples to sift through and fewer versions, compared to Unity.
     
  15. Trindenberg

    Trindenberg

    Joined:
    Dec 3, 2017
    Posts:
    412
Stopped using ChatGPT ages ago. BingGPT, which also has a precise mode with Copilot built in (also built into Win11 now), works very well. It's silly if you want it to think mathematically, as it's not a mathematician, but at logic and structure it excels. Optimizing is probably not its best point either, but it can point you in the right direction, or show ways/functions you can use.
     
  16. marcoantap

    marcoantap

    Joined:
    Sep 23, 2012
    Posts:
    246
I've found that Poe's Claude-instant is better at Unity; the popular LLMs are way too chatty. AIs provide that missing doc page or missing API function so we don't have to spend a day tinkering with small code, and can instead focus on the zoomed-out development of the project.
     
    DragonCoder likes this.
  17. VertexRage

    VertexRage

    Joined:
    Dec 18, 2018
    Posts:
    85
I would split the use into two categories: A) chatbots and B) code completion. The first I do from time to time, usually for simple "throw away" scripts (e.g., go through all prefabs in a selected directory and remove a script that I forgot to remove, or add a triangulate modifier to all selected objects in Blender). Good LLMs (like GPT-4) are really good at such stuff. I can only guess that's because there is a lot of code out there that does the thing I want, or something very similar.

For B) I use Codeium. It is a really great help in speeding up development. Surprisingly, in my experience it's even more helpful for C++ (for Unreal) and PyQt code. I think the more boilerplate is needed, the more useful such context-aware code completion is.

One big disclaimer though: you have to know what you are doing in both cases.
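As an illustration of the kind of throw-away editor script meant here, something like the following sketch could strip a leftover component from every prefab under the folder selected in the Project window (the component name `LeftoverDebugScript` is a made-up placeholder, and this is my own sketch rather than generated output):

```csharp
// Editor-only utility: place in an Editor/ folder inside a Unity project.
using UnityEditor;
using UnityEngine;

public static class RemoveLeftoverScript
{
    [MenuItem("Tools/Remove LeftoverDebugScript From Selected Folder")]
    static void Run()
    {
        // Folder currently selected in the Project window.
        string folder = AssetDatabase.GetAssetPath(Selection.activeObject);

        foreach (string guid in AssetDatabase.FindAssets("t:Prefab", new[] { folder }))
        {
            string path = AssetDatabase.GUIDToAssetPath(guid);
            GameObject root = PrefabUtility.LoadPrefabContents(path);

            bool changed = false;
            foreach (var c in root.GetComponentsInChildren<LeftoverDebugScript>(true))
            {
                Object.DestroyImmediate(c, true); // remove the unwanted component
                changed = true;
            }

            if (changed)
                PrefabUtility.SaveAsPrefabAsset(root, path);
            PrefabUtility.UnloadPrefabContents(root);
        }
    }
}
```

Exactly the sort of tedious one-off script where a chatbot tends to save time, since it only has to be right once and is easy to verify in the editor.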
     
    marcoantap likes this.
  18. FaithlessOne

    FaithlessOne

    Joined:
    Jun 19, 2017
    Posts:
    330
At the moment I use GitHub Copilot. It's much better than chat-only AIs, because it has the context of your project and your code. It can auto-complete code directly while typing, or even suggest blocks of code to finish your current programming step. It can also implement simple methods on its own given an appropriate method signature and comments. It excels at refactoring, because it knows the former code and makes very good suggestions when you rewrite it. Repetitive code and boilerplate generation are also quite good. But the more complex the programming task, the more failures. Still, sometimes you get an idea even when the AI provided broken code, and you correct/complete it yourself.

GitHub Copilot also has a chat where you can ask questions. In my opinion it is better for general programming questions than generalized ChatGPT/Bing Copilot, because it's specialized on coding, though still fully able to hallucinate. What I like most about the chat is asking questions about your project, like "review class X please" or "can method Y be performance-optimized", or asking about an error in a method and possibilities to fix it. These interactions can be quite awkward with chat-only AIs because of code-posting and character-size limitations.

All in all, I find GitHub Copilot a useful tool with its pros and cons. It makes a developer's life more comfortable and easier, but you still have to do the main work.
     
  19. JulianNeil

    JulianNeil

    Joined:
    Jun 27, 2022
    Posts:
    96
I find GitHub Copilot to be beneficial overall - enough to pay the small fee to use it - but hit and miss in specific instances.
Occasionally its suggestions are spot on and save me time. Sometimes they are close, and I use them and modify them. Usually I ignore them completely.
I have found prompt-generated code to be on the whole pretty poor quality, typically lacking any edge-case handling, and often with small errors that are difficult to spot. But again, sometimes I am surprised.
     
  20. angrypenguin

    angrypenguin

    Joined:
    Dec 29, 2011
    Posts:
    15,633
    Can't trust it, and will slow their own progression by using it. If they're skipping practice by getting a machine to provide answers for them, then they're not developing the skills to come up with the answers for themselves.

    Would we?

    If my understanding is correct, the way that these LLM-based systems work means that they're going to give modified versions of existing, published solutions. That's useful in a lot of contexts, but means that they're not going to be solving new problems for us. If an AI saves someone time elsewhere so they can put it into better problem solving, awesome. But I can't see these systems putting good programmers out of a job, because a good programmer can solve new problems for themselves.

    It's entirely possible that AI could reach the point where it does solve new problems, but my understanding is that this would require development in a different area to what we've seen in the last couple of years.
     
  21. bugfinders

    bugfinders

    Joined:
    Jul 5, 2018
    Posts:
    1,973
I sometimes use Copilot, but sometimes Rider/Copilot/something just goes nuts and either won't actually let me type, or undoes what I type... so I get annoyed and turn everything off.
     
  22. spiney199

    spiney199

    Joined:
    Feb 11, 2021
    Posts:
    8,151
    I haven't found AI stuff to be very useful, for coding or otherwise. You always have to verify the code it gives you, so you spend as much time reading the code as you would writing it yourself.

    Other integrated AI stuff like the one Adobe is peddling in Photoshop and similar have generally been useless at doing what I ask them to.

And from what I've seen in some forum posts here and other places, newbies trying to learn to code through ChatGPT and co. doesn't seem to be a very good way to learn. It's more of a crutch than anything.
     
  23. zulo3d

    zulo3d

    Joined:
    Feb 18, 2023
    Posts:
    1,054
    Heck, I rarely feel comfortable using my own code that I wrote a year ago. :)
     
    marcoantap likes this.
  24. SunnySunshine

    SunnySunshine

    Joined:
    May 18, 2009
    Posts:
    988
    I use ChatGPT to an extent. Results are very hit or miss. Sometimes, it can nail an implementation on the first try, saving me a lot of time. Other times, it hallucinates utter garbage.

While the code it writes is rarely directly usable, its ideas are usually sound and something you can take inspiration from. But definitely DO NOT trust any code it generates.

For C#, Python, and other things that have a lot of code available, it does fairly OK. But if you ask it about anything else - like, let's say, shaders for Godot - it's awful. Which isn't that surprising if you think about it.
     
    Ryiah likes this.
  25. Lurking-Ninja

    Lurking-Ninja

    Joined:
    Jan 5, 2024
    Posts:
    525
I do not use any AI to aid my development. They are utter, hot garbage. The one thing they'd do at a "well, that's just bad, but not utter garbage" level is boilerplate, which I solved a long time ago: you write it once and then reuse the code all the time. That's less error-prone than the AI garbage.
Also, I'm fairly experienced, so I don't need "inspiration" to write code. There are very few problems I can't get a hold of, and for those I'd rather hit the books and other places where said problems are discussed. AI can't help with those anyway.
     
    spiney199 likes this.
  26. DragonCoder

    DragonCoder

    Joined:
    Jul 3, 2015
    Posts:
    1,742
This is a bit much of an oversimplification. E.g., there's no problem in telling it to follow a naming schema, or even to replace a variable name with something absurd you tell it to. So it definitely does not just assemble text it has found somewhere.
You are right that it won't solve entirely new problems like, let's say, the software for a Level 5 autonomous vehicle.
But isn't it a major staple in software dev that you break down large problems over and over into smaller ones?
ChatGPT and co. are quite okay at this process, and those smaller problems have indeed been solved many times in the dataset. To stay in our field: do games really encounter new problems that cannot be broken down into simpler ones that often?
Now imagine if the system were "perfect"...
Not that I believe that perfection is really achievable.
     
  27. andyz

    andyz

    Joined:
    Jan 5, 2010
    Posts:
    2,285
I mean, Visual Studio has IntelliCode, which guesses your whole line of code as you type. This has been helping me for months in reducing typing!
ChatGPT can give you a starter for something, but it's better if you are going to understand everything it gives you so you can check it.
     
  28. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,623
A system like Nanite is an example of such a problem being solved.

Basically, angrypenguin worded it imperfectly, but he is correct in pointing out the key issue. Current LLMs/neural networks do not create the new; instead they rearrange the old. It is not quite "text it found somewhere" but "concepts it found somewhere".
     
    angrypenguin likes this.
  29. Lurking-Ninja

    Lurking-Ninja

    Joined:
    Jan 5, 2024
    Posts:
    525
Well, sorry to jump in the middle, but I don't think that's an accurate description. In my opinion, it is "phrases it found statistically close together previously". It doesn't recognize "concepts", it recognizes patterns. As far as I understand these things.
     
    marcoantap, Ryiah and angrypenguin like this.
  30. DragonCoder

    DragonCoder

    Joined:
    Jul 3, 2015
    Posts:
    1,742
Makes sense. Though I bet a major part of software devs also just do that, and their bosses do regard them as good software devs...
Very, very few work on something as revolutionary as Nanite.
     
  31. angrypenguin

    angrypenguin

    Joined:
    Dec 29, 2011
    Posts:
    15,633
You've oversimplified my description. :p I didn't say it reassembles text, I said it gives modified versions of published solutions. It'll pull them apart and re-present the info in its own way, and it's great at that. The point is that the ways it modifies things are exactly what I'd expect from a thing called a Language Model.

I mean, Nanite is a pretty outstanding example, but it's far from the only one. Games and graphics are fields where people are pretty constantly pushing the envelope.

But you're right, most software devs do get by just redoing already-established stuff. Partly because most of the time that's sufficient or even desirable, but also partly because, I increasingly feel, that's what programmers these days aim for.
     
    DragonCoder likes this.
  32. retired_unity_saga

    retired_unity_saga

    Joined:
    Sep 17, 2016
    Posts:
    289
    everyone should use AI to code, but understand you should be able to fix the problem yourself.
     
    DragonCoder likes this.
  33. DragonCoder

    DragonCoder

    Joined:
    Jul 3, 2015
    Posts:
    1,742
Let's be realistic though: even on a game that absolutely pushes the boundaries, what percentage of the dev work actually does that? And how much, in comparison, is the same old "get the character to move just right", "create a performant swarm mechanic for flocks", "add an item inventory", "efficiently synchronize players across the network", "save/load the game state to disk", etc.?

Reminds me of when I first dreamed of a career as a developer as a teen. I saw how there are thousands of libraries for "everything" and imagined how I'd do things no one else has done, because software can just be copied, right?
Reality really is different...

AI might be the chance to actually allow the developer to focus solely on what's truly new, but we are still far from that. And I think at that point we might have less need for software developers overall.

Hmm, it could also be that the more there already is out there, the harder it becomes to bring something new. Of course that should not be an excuse not to try.
     
    Last edited: May 3, 2024
  34. Murgilod

    Murgilod

    Joined:
    Nov 12, 2013
    Posts:
    10,253
    Or I could not waste my time with prompt massaging and editing machine generated code because that tends to take up just as much if not more time than just doing it myself.
     
    angrypenguin and Lurking-Ninja like this.
  35. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    21,437
Tokens, to be precise. It's dependent on the LLM, but a token can be as simple as a symbol or a piece of a word, or as complex as an entire word. Here's a screenshot of my script template passed to ChatGPT's token visualizer.
    upload_2024-5-3_12-2-37.png

    https://platform.openai.com/tokenizer

    Correct. It's a statistics engine. It's predicting what the next token will be in the current sequence of tokens. So if it sees "using UnityEngine;" it thinks the most likely candidate after "public class Foo : " is "MonoBehaviour". It will do this and other similar things even when I'm not asking for MonoBehaviours.

    upload_2024-5-3_12-18-20.png
     
    Last edited: May 3, 2024
  36. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,623
    I'll stand by my definition.

The issue here is that you're thinking of a Markov chain. While it is similar to a Markov chain, it is not one.

    As far as I can tell it recognizes concepts, they're stored internally, then it uses those to form patterns. See:

    https://www.youtube.com/shorts/FJtFZwbvkI4 (word vectors, youtube short)

Basically, you can ask ChatGPT to write a poem about Cthulhu in Lojban. It will oblige. Statistically, there are no poems in Lojban about Cthulhu in its training data.

    se klani le xamsi cu se jivna
    le be'o drata ce xajmi prenu
    ctulxu, le prenu barda cnano
    no da pu klama ko'a no drata

    se mruli le slilu zbasu tcima
    ce se cimoi carna ce setca zunsna
    ko'a se cliva ce se krefu
    le barda jorne gunka kei ku

    ko'e jikru mo'u xusra le nu ganse
    ce se damba le nu se cmila xilmo
    no da kakne le nu jdika ko'a
    le ctulxu je'u klama ce le fagri xasnu

    lo selpla be do cu cizra, ko'e zmadu
    le ctulxu je'u klama cu bilga lo nu kakne
    no da ku zgana lo nu ko'a kelci
    le ctulxu je'u cliva ce le barda jorne kruca.
    ------
    From the depths of the sea he rises,
    A being ancient and despised,
    Cthulhu, the Great Old One,
    Whose power none can deny.

    With tentacles that writhe and squirm,
    And eyes that blaze like a fiery inferno,
    He awakens from his slumber,
    To reign supreme once more.

    His call echoes through the night,
    A sound that fills the heart with fright,
    For none can resist his power,
    When Cthulhu rises to devour.

    So beware, ye mortals, and take heed,
    For the awakening of Cthulhu bodes ill indeed,
    And none can escape his wrath,
    When he rises from the depths of his watery path.

Long story short, while it is a statistical text engine, the claim of "phrases found together" is false. That describes a Markov chain. An LLM is a huge, lossily compressed body of text, but for maximum compression you need to derive concepts from the text. That is likely the reason why you can "talk" to it.

Also, the reason I spoke about concepts is that in my experience it does have data about them. I've dealt with multi-language conversation and translation. You can take one language and discuss its meaning in another, and it'll be aware of very subtle details. That is a concept.
     
    Last edited: May 3, 2024
  37. angrypenguin

    angrypenguin

    Joined:
    Dec 29, 2011
    Posts:
    15,633
    How long is a piece of string?

    But it’s still not the point. First, that small percentage is necessary, e.g. taking some approach described in a research paper and optimising it for real-time use. Being small is not the same as being unimportant. Second, it’s not just in the games themselves, it’s throughout the ecosystem.

    Even where innovations are small, those small innovations often build on other small innovations. It may be slow, and hard to see if you’re immersed in it, but it’s happening.
     
  38. DragonCoder

    DragonCoder

    Joined:
    Jul 3, 2015
    Posts:
    1,742
Fully agreeing with you. But what does that have to do with my claim that if AI with its current features were perfect, fewer software/game developers would be needed?
     
  39. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,623
That claim is not very, well, useful. Effectively you're expressing your own opinion, but the validity of that opinion is questionable. Current "AIs" are LLMs, which are inherently limited by their design. Even if you have a perfect LLM, it is still an LLM. To start talking about mass replacement of programmers, we'd need to talk about AGI, and we don't have that.

There are opinions that reliance on tools like Copilot decreases the quality of the code. In this scenario the LLM is a tradeoff which increases long-term cost, and that is not a desirable thing.

In the same fashion, it is possible to claim that "if we had perfect developers, we'd need fewer of them". Because if your job consists of rearranging old concepts, the implication is that someone failed to abstract them into a reusable higher-level framework everyone should be using, and now everybody is wasting time reinventing the wheel.

Also, I do not recommend relying on LLMs for coding, because that makes you an LLM operator instead of a programmer, and that's a different skill. If an employee's job consists of reviewing AI-generated boilerplate, then the reasonable thing for a business is to lower the salary or replace them with someone who costs less, as that is a less valuable skill. We're already seeing this with images: the work of a painter costs significantly more than that of an image-generator operator.
     
  40. angrypenguin

    angrypenguin

    Joined:
    Dec 29, 2011
    Posts:
    15,633
    Well, look, you said "we'd already be out of a job". I don't think I'd be out of a job, and was assuming that others around here also would be good enough to keep theirs.

    I'm pretty sure I've said this in other threads here. I doubt that AI will kill programming as a profession, but absolutely expect that it will change the nature of the work and raise the skill floor. It'll also probably make the people who remain above the skill floor more productive by handling a bunch of the grunt work, but likely not as much as some people seem to think, because whether or not a human wrote the code, a human still needs to take the time to understand it.
     
    Ryiah, Antypodish and neginfinity like this.
  41. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,623
    I believe this is what happened with image generators.
     
  42. RoughSpaghetti3211

    RoughSpaghetti3211

    Joined:
    Aug 11, 2015
    Posts:
    1,716
Use it to generate all the docstrings... it seems to do OK with that.
     
    Ryiah likes this.
  43. Antypodish

    Antypodish

    Joined:
    Apr 29, 2014
    Posts:
    10,814
    Indeed.

I personally doubt AI will anytime soon, or ever, be able to replace programmers.

The reason is, there are not enough samples for AI to train on.
We've got full images to train on, and full books to train on.
But we've got only a handful of fully functional, reliable, open-source programs that generative tools can be trained on. And that's not even touching problems like software written in different programming languages, different software versions, different styles, and many more variables.
Most of the training source is docs, books, and online snippets, where not only is the quality questionable in many cases, but most importantly, they don't discuss the full software pipeline from start to finish. Decompiling software won't be any help either.

The best example I can see is a magic-button generator for simple Unreal FPS games, since there are many FPS samples available. Still, at best it will be as simple as a walking simulator with a few random enemies to shoot at, mostly missing the whole concept of what is actually fun to play.
     
  44. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    21,437
Agreed, but I do think it could eliminate employees who refuse to at least learn to make use of it in some fashion, even if only for the most basic of tasks. Maybe not for the kinds of work that we do, but at companies where "clean code" leads to itty-bitty functions a few lines each, and then time spent typing up the test code.

I remember some of the earliest ChatGPT videos of enterprise C# developers setting up test cases and then having the AI write the code based on the test cases, and vice versa, and then explaining what they thought of it. It was very interesting to watch.
     
    marcoantap likes this.
  45. PanthenEye

    PanthenEye

    Joined:
    Oct 14, 2013
    Posts:
    2,121
Sam Altman is calling GPT-4 mildly embarrassing and the least advanced AI you'll ever use. Might be marketing, but might also be another disruption around the corner.
     
  46. Murgilod

    Murgilod

    Joined:
    Nov 12, 2013
    Posts:
    10,253
    Literally everything Altman says about ChatGPT is marketing.
     
  47. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    21,437
    Meanwhile he's pretending that GPT-3.5 doesn't exist. :p
     
  48. marcoantap

    marcoantap

    Joined:
    Sep 23, 2012
    Posts:
    246
The conspiracy theory is that GPT-4 has been gradually watered down for "security" and to lower the bar for the next versions.

On the job-killing side, I have been able to ramp up on technologies like frameworks and APIs I had barely any knowledge of in a couple of days, instead of weeks of searching docs and Stack Overflow. So it's a multiplier of your know-what-ur-doin' skill. If that's <= 1, you will fall to LLM hallucinations. And that's the fence that saves jobs.
     
    Ryiah likes this.
  49. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,623
Not sure about GPT-4, but GPT-3 certainly was. There was a huge difference between what ChatGPT could do in Jan 2023 compared to Jun 2023. The current version is pretty much brain-damaged, although it still retains some utility.
     
  50. retired_unity_saga

    retired_unity_saga

    Joined:
    Sep 17, 2016
    Posts:
    289
I dunno what everyone says or thinks,

but I just used AI to fix someone else's asset. Then it told me how to fix it in other ways, and then I was able to learn how to adjust the entire script to do it differently on top of that.

AI is the way.
     
    DragonCoder likes this.