re: Current A.I. explosion…
Posted on 4/1/23 at 9:53 pm to iPleadDaFif
quote:
No man. It’s real. You can write 5-7 1000 word articles, feed it into ai, and tell it to adopt your writing style.
Once adopted, you can copy/paste random articles into it, and it will spit out rewritten articles that sound exactly like you wrote them yourself.
That shite is real, because I'm using it now. I'm simply curious to know how many average people (by average, I mean people not entrenched in DR Marketing) know about it.
And for the ones that do, how do you feel about its use?
Reason I ask:
Figured there's no better place to get real opinions than the hardest, most opinionated people I know on the www.
While it is true that AI can be used for text generation and can adopt a particular writing style, AI itself is not inherently harmful. The way it is used is what can lead to harm or misuse.
In the case of text generation, AI can certainly be a powerful tool for content creation and can save time and effort in writing. However, any use of AI-generated content should be ethical and should not involve plagiarism or deception.
It is also worth remembering that AI is not capable of creativity or original thought in the same way that humans are. While it can generate text that sounds like it was written by a human, it is still limited by the data and programming it has been given.
In summary, while the use of AI for text generation can have practical benefits, it should be used responsibly and ethically, and its limitations should not be overestimated.
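For anyone curious what that style-adoption trick actually looks like in code, here is a minimal sketch. It assumes the OpenAI Python library as it existed in early 2023 (openai.ChatCompletion.create); the model name, file name, and prompts are all illustrative, not a prescription:

import openai

openai.api_key = "YOUR_API_KEY"  # assumption: you have an API key

# The "5-7 1000 word articles" the model should imitate,
# stored here in a hypothetical my_articles.txt.
style_samples = open("my_articles.txt").read()

def rewrite_in_my_style(source_article):
    # One call: show the model the samples, then ask it to rewrite
    # new text in that voice.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # illustrative choice of model
        messages=[
            {"role": "system",
             "content": "Study these writing samples and adopt their voice:\n" + style_samples},
            {"role": "user",
             "content": "Rewrite the following article in that style:\n" + source_article},
        ],
    )
    return response["choices"][0]["message"]["content"]

print(rewrite_in_my_style("paste any random article here"))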
Posted on 4/1/23 at 9:54 pm to theunknownknight
quote:
That's not AI. That's advanced machine learning based off of complex indexing patterns.
Interesting. Can ya elaborate on that?
Posted on 4/1/23 at 9:56 pm to TigerIron
quote:
It's not AGI. It is a form of AI.
So is a search engine, if that's the standard - so is Google Maps.
Most people don't know the difference between indexing, machine learning, specific AI, and general AI.
I classify AI as something we, ultimately, won't be able to control at all. What we have now is probably worse: a contrived AI, controlled by big money/government, being used to manipulate the masses.
This post was edited on 4/1/23 at 9:57 pm
Posted on 4/1/23 at 9:57 pm to iPleadDaFif
quote:
Interesting. Can ya elaborate on that?
The distinction between AI and machine learning can be a bit blurry, but generally speaking, AI refers to the broader concept of creating machines that can perform tasks that would normally require human intelligence, while machine learning is a specific approach to achieving this goal.
Machine learning involves training a machine to learn from data and improve its performance at a specific task over time, without being explicitly programmed for that task. This is typically done using algorithms that can identify patterns and make predictions based on those patterns.
The process of generating text using the approach mentioned in the original statement - feeding in articles and training the machine to adopt a specific writing style - is an example of machine learning. Specifically, it involves using natural language processing (NLP) techniques to analyze the text and identify patterns in the data.
So, while the output may seem like it was generated by an AI system, it is actually the result of a specific type of machine learning. The machine is not making creative decisions or exhibiting human-like intelligence, but rather applying a set of rules and patterns that it has learned from the training data.
In summary, the process of generating text using complex indexing patterns is an example of advanced machine learning, but it does not necessarily involve true AI.
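To make the "learned patterns, not understanding" point concrete, here is a toy sketch (using scikit-learn; the sample texts and labels are purely illustrative). The classifier learns to tell two authors apart from nothing but word-frequency statistics - exactly the kind of pattern-fitting described above, with no comprehension of either author:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data; real use would need far more text per author.
texts = [
    "Short punchy sentences. No fluff. Ever.",
    "Get in. Get out. Done.",
    "It is worth considering, at considerable length, the many nuances involved.",
    "One might reasonably argue, upon careful reflection, that the matter is subtle.",
]
labels = ["A", "A", "B", "B"]

# Turn text into word-frequency features, then fit a linear model on them.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Predict which author an unseen passage "sounds like" - pure pattern-matching.
print(model.predict(["Upon reflection, one might argue the nuances matter."]))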
Posted on 4/1/23 at 10:16 pm to NPComb
If you're anywhere near the IT field, do what I did. I am NOT a programmer; I'm a network admin. I have asked ChatGPT to write several *simple* programs in Python, set up complex reverse proxy configurations, and write Docker configs without knowing what the hell I was even asking. And they worked. These are things no Google search could ever answer, and it was delivered in steps... Amazing and scary.
ETA: yes, I agree it's stupid to argue about whether this is *sentient* behavior. I don't believe this is a human robot, but there's no doubt in my mind after trying it that this is definitely a huge game changer for what machines can do.
This post was edited on 4/1/23 at 10:21 pm
Posted on 4/1/23 at 10:43 pm to theunknownknight
quote:
So is a search engine if that’s the standard - so is Google maps.
All of this “GPT is just a search engine” talk feels like an attempt to bury our collective heads in the sand. GPT isn’t AGI, but it’s a hell of a lot more than a search engine.
The contention seems to be that GPT doesn’t actually “understand” its inputs or outputs - it’s just pattern-matching. That’s a silly contention for a few reasons:
1. Pattern-matching happens to be a pretty key part of how the human brain works as well.
2. True “understanding” is not a particularly easy thing to measure. Before we know it, we will be having the same debate about whether GPT-12 actually “feels” emotions or is just associating emotions with interactions based on its training. See where this is going?
These are questions of sentience, and the fact that they are being debated at all tells you how far the technology has come. At some point it doesn’t matter whether the machine truly understands, or feels, or whatever - we may never be able to know that any more than we can know whether another person sees the color blue the same way we do.
3. While GPT and other large language models don’t meet the generally-accepted criteria for AGI today, it’s not hard to imagine how LLMs are a huge step in that direction. There are signs that GPT is already capable of determining fairly complex cause-and-effect relationships using the same pattern-matching logic that people criticize. It’s already teaching itself new skills without being specifically trained for them, such as when it learned to identify positive and negative connotations without being told to do so.
All of these things represent a lot of “the hard part” with respect to AGI. People seem to be judging LLMs in a vacuum, in the context of search engines and chatbots, without thinking about what these abilities will mean once expanded and connected to other systems. It’s like looking at an internal combustion engine and saying “yeah, but all it does is turn a shaft.”
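If you want to poke at that "untargeted skills" claim yourself, here's a quick sketch (again assuming the early-2023 OpenAI Python library; the model name and prompt are illustrative): ask it to label connotations via a plain-language request, with no task-specific training on your part.

import openai

openai.api_key = "YOUR_API_KEY"  # assumption: you have an API key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # illustrative
    messages=[{
        "role": "user",
        "content": 'Label the connotation of "the service was painfully slow" '
                   "as positive or negative, then explain in one sentence.",
    }],
)
print(response["choices"][0]["message"]["content"])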
quote:
I classify AI as something we, ultimately, won’t be able to control at all.
That sounds like a pretty unconventional way to define it.
Anyway... /rant