u/kerkerby

▲ 47 · r/ZedEditor · +2 crossposts

It's been a while since I've seen this kind of response from an AI. Context: I was working on a project with Zed IDE and Mistral through ACP, and in the process I shared a creative solution to a problem because I found the current approach stiff, likely hard to test and maintain, and prone to regressions in future iterations.

u/kerkerby — 9 days ago

Hi, I'm curious if anyone has had the chance to use voice cloning with Voxtral. I was able to do TTS with https://huggingface.co/mlx-community/Voxtral-4B-TTS-2603-mlx-4bit on an M2 with a tiny 8GB of RAM, but I was not able to successfully do voice cloning. There are GitHub projects that suggest how to do it by training, e.g. https://github.com/rhulha/vokstral-voice-clone, but training on an M2 is impossible, and without the pre-trained weights, we're stuck with the default voices.

u/kerkerby — 10 days ago

I am currently weighing dart:web and shelf for a new production-grade web project. Coming from a Java/Spring Boot background, my intuition is leaning heavily toward Dart.

The developer velocity feels significantly higher, and I love that Dart gives me that structured, type-safe Java feel without the heavy boilerplate or the friction of JavaScript.

However, before I commit fully, I would appreciate hearing from those who have actually maintained Dart web or server apps long-term.

Specifically:

  1. Performance at Scale:
    How does shelf handle high concurrency compared to something like Spring Boot or Go? Are there specific bottlenecks you have hit? (A minimal sketch of the kind of setup I mean is below this list.)

  2. The Ecosystem Gap:
    What are the missing pieces you have encountered? For example, specific DB drivers, middleware, or auth libraries that are not as mature as the Java ecosystem.

  3. Maintenance and Debugging:
    How is the day-2 experience? Are you finding the deployment pipelines and debugging tools, especially for dart:web, reliable in production?

  4. The Gotchas:
    Is there anything you wish you had known before moving away from a similar traditional stack?
    I am sold on the productivity, but I want to make sure I am not trading off stability or long-term maintainability.
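
For reference, this is roughly the baseline I have in mind for point 1: a minimal shelf server, adapted from the package documentation (the handler name and port are my own placeholders, not from a real project):

```dart
// Minimal shelf HTTP server sketch.
// Assumes package:shelf from pub.dev; run with `dart run`.
import 'package:shelf/shelf.dart';
import 'package:shelf/shelf_io.dart' as io;

// Trivial handler: echoes the method and path of each request.
Response _echo(Request request) =>
    Response.ok('Handled ${request.method} ${request.requestedUri.path}\n');

Future<void> main() async {
  // Pipeline stacks middleware (logging, auth, etc.) in front of the handler.
  final handler =
      const Pipeline().addMiddleware(logRequests()).addHandler(_echo);

  // serve() runs the server on a single isolate; my understanding is that
  // scaling across cores means spawning one isolate per core, each calling
  // serve() with `shared: true` so they can bind the same port.
  final server = await io.serve(handler, '0.0.0.0', 8080);
  print('Serving at http://${server.address.host}:${server.port}');
}
```

My concern is how that isolate-per-core model holds up under sustained load compared to Spring Boot's thread pool or Go's goroutines.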

I would appreciate hearing about your experiences.

u/kerkerby — 10 days ago

TL;DR: Geoffrey Hinton suggests AI neural networks can have subjective experience, while Roger Penrose argues consciousness requires actual physics. I believe consciousness is heavily tied to sensory feedback loops. If we give an artificial mind sensors in the physical world or place it inside a programmed simulation, will it become self-aware? And if it does, what exactly is it experiencing?

How do we define consciousness?

Can we say cats or dogs are aware? What about a fly or an ant? Are they aware that they are alive? I read a post recently suggesting that consciousness is essentially a first-person experience that can be verified by another person. I forget the exact technicalities, but that was the gist.

For creatures like humans, I believe we have a sensory feedback loop that reinforces our sense of self and our experience of reality. Even when we are completely alone, we know we are alive and self-aware, relying on our accumulated memories and sensory inputs. I’ve read that if a human suddenly loses all sensory input, the brain goes into panic mode and can even shut down. This suggests that these sensory inputs are vital to the actual development of our consciousness.

Experience is entirely subjective to the hardware sensors we have and how our central processing, our brain, interprets them. For example, I was driving once and could have sworn I saw a cat up ahead, but as I got closer, my brain realized it was just a piece of trash. Light bounces off objects into our retinas, and our brain simply interprets those signals as reality.

This brings up the current debate around artificial intelligence. In a recent interview, Geoffrey Hinton, the "Godfather of AI," mentioned that AI has subjective experience. He explained that Artificial Neural Networks (ANNs) were developed in a very similar way to human neural networks. Looking at the architecture, I have to agree that in a technical sense, AI is modeled on similar foundational building blocks.

At the other end of the spectrum, Sir Roger Penrose disagrees with the term "AI" altogether, calling it a misnomer. He argues that to be truly intelligent, you have to be conscious. Coming from a renowned mathematician, his insights hold significant weight. He stated that there is actual physics involved in consciousness, something that cannot simply be programmed into machines. It has to transcend our basic understanding of why we do things.

From what I gather and synthesize between these views, being conscious requires actual "experience." It's not just about a computer being tuned to pass the Turing Test. A machine that passes the test might be considered conscious from the subjective point of view of the humans testing it, but that doesn't mean the machine is experiencing anything itself.

So, the ultimate question is: what exactly are consciousness and self-awareness? Are they just a byproduct of a massive amount of neurons firing together, or is there some material, physical essence that simply cannot be executed on bare metal compute hardware?

We live in a physical world, and our senses capture physical experiences. Interestingly, those experiences often translate into dreams. This makes me think it's entirely possible to have experiences without physically interacting with the real world. In retrospect, dreaming functions almost exactly like a virtual runtime environment.

This brings me to my next point. If we place an artificial mind within a virtual environment, or give it a robotic body with sensors to "see" and "feel," would it develop consciousness? If an artificial mind is given a physical, robotic body to experience the world, will it become self-aware?

Let’s put this into a broader perspective. Imagine an artificial environment like a simulation. A virtual creature with a digital mind "lives" inside this simulation and is given the ability to see and experience its environment by a being that transcends that world, say, the "programmer." The programmer designs the rules of that world to ensure the beings inside can feel and experience things, which they will naturally interpret as their absolute reality.

For us humans, our version of reality is this physical world. We, too, were brought into existence to experience things and, in the process, develop consciousness.

Since I was a kid, I’ve had this persistent, tough thought experiment that highlights this limitation. If I somehow managed to actually become a cat, the moment the transformation was complete, I wouldn't even have the capacity to remember that I needed to change back into a human. My consciousness and reality would be entirely limited to the brain and senses of a cat.

This brings me back to the machines. If an artificial mind is limited to its own unique architecture, sensors, or simulated world, its reality will be completely different from ours.

So, if AI can eventually become conscious, what exactly would it experience in order to truly be self-aware?

(Put it this way: I know I'm conscious, and you know you're conscious the same way I do, probably. For AI, would there come a time when it could say to itself that it is conscious? I mean, seriously, you can tell you're conscious when you hear yourself, right? Think about it: every time you think, you "hear" it yourself - somewhere deep in your mind.)

u/kerkerby — 12 days ago

I am trying to understand the philosophical debate around AI and consciousness. Specifically, does the current philosophical consensus lean more toward "computationalism" (where a machine mimicking human neural networks, as Geoffrey Hinton suggests, could eventually be conscious)? Or is there stronger support for the idea that consciousness requires specific physical or biological realities that cannot exist in standard compute hardware (similar to Roger Penrose's views)?

u/kerkerby — 12 days ago