Open Discussion about the social and ethical concerns of Vision-Language-Action (VLA) Models
Coding used to be the cornerstone skill for tech roles — but with the rise of LLMs, that's shifting. Critical thinking and multitasking are becoming the new core competencies.
There's also a behavioral shift worth noting: people are moving away from piecing together answers via Google and going straight to LLMs for speed. I think search engines will see significantly less traffic as LLM search capabilities mature.
My bigger concern now is VLAs (Vision-Language-Action models). If robots reach the performance level of today's top LLMs like Claude or Gemini, could that trigger physical laziness the same way LLMs may have fueled mental laziness?
A few questions I'd love to hear your thoughts on:
- What do you think will be the core skills for tech roles in the future?
- Is it ethical to keep pushing VLA development, or does it risk becoming an Oppenheimer moment, where the technology, once misused (think robotic armies), leads to regret?
Note: Ironically, I used an LLM to rephrase this post :) (language barriers)