u/musty_O

Practice Pronunciation For Korean (High Speaking Intensity)

This is my APP.

I'm an actual engineer, and this is NOT slop-coded... except that big blue start button. That's vibe coded.

There's always a post or comment about speaking and pronunciation practice. This isn't as good as a teacher, but it's somewhere to start.

This is a paid product (£20/month), but you can try it for free (no card required, signup required). It currently has simple and advanced speaking practice; if you switch to a higher CEFR level (B1, B2, C1, or C2), it changes to a Q&A style for advanced speakers.

Many more features are coming for reading, writing, and listening.

u/musty_O — 22 hours ago

The Trajectory Of Artificial Intelligence [D]

The Current State Of The Art

Scaling laws have dominated the field since the Transformer's invention in 2017, through the development of OpenAI's GPT series of models and the release of GPT-3.5 (ChatGPT) in late 2022; data, parameters, and training tokens have each been scaled by orders of magnitude. Additionally, non-trivial algorithmic innovation has contributed step-function improvements, the most significant being "test-time compute", which has enabled reasoning models by revealing a new scaling dimension alongside the aforementioned data, parameters, and training tokens.
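For context, the scaling-law observation is usually written as a power law in each resource. This is the standard Kaplan et al. (2020) form, quoted here as background rather than taken from the post:

```latex
% Loss falls as a power law in parameters N and training tokens D,
% with empirically fitted constants N_c, D_c and exponents \alpha_N, \alpha_D:
L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N},
\qquad
L(D) \approx \left(\frac{D_c}{D}\right)^{\alpha_D}
```

Test-time compute adds a further axis: for a fixed trained model, task performance can also be traded against the amount of inference computation spent per query.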

The Transformer architecture is, and will remain, a primary model behind large-scale AI systems. However, prominent figures in the field remain skeptical and disagree about the extent to which this architecture will play a role in the field's long-term future. Fundamentally, the disagreement seems to stem from the fact that the Transformer is a statistical prediction machine (next-token prediction) that will inevitably suffer from limitations rooted in that nature.
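"Next-token prediction" just means repeatedly sampling each token from a learned conditional distribution. A toy bigram version (with a made-up probability table standing in for a real model) looks like:

```python
import random

# Toy next-token prediction: given the last token, sample the next one from
# a conditional distribution over the vocabulary. The table is invented for
# illustration; a real LLM conditions on the whole context, not one token.
probs = {
    "the": {"cat": 0.5, "dog": 0.3, "<end>": 0.2},
    "cat": {"sat": 0.7, "<end>": 0.3},
    "dog": {"ran": 0.6, "<end>": 0.4},
    "sat": {"<end>": 1.0},
    "ran": {"<end>": 1.0},
}

def generate(start: str, rng=random) -> list:
    tokens = [start]
    while tokens[-1] != "<end>":
        dist = probs[tokens[-1]]                     # P(next | current)
        words, weights = zip(*dist.items())
        tokens.append(rng.choices(words, weights=weights)[0])
    return tokens

random.seed(42)
sentence = generate("the")
```

The "statistical" critique and the "creative randomness" point later in the post both live in that sampling step: the model draws from a distribution rather than retrieving a stored answer.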

The question "can the Transformer/LLMs reach human-level intelligence?" further clarifies this divide: many argue that current LLMs are at "PhD-level intelligence", while others argue they're just "fancy autocomplete". Still, few will dispute that LLMs are capable of creativity, whereby they can bridge ideas from different domains to solve a problem, provided they're prompted to do so, e.g. "create an image of a cat in space", "write a poem about computers in the style of …"

Extending this, we are observing that the combination of reinforcement learning, the ability of modern systems to learn from self-generated data, the "creative randomness" enabled by their statistical nature, and long training times creates a recipe for LLMs to extend far beyond the available knowledge, and thus beyond human experience (e.g. AlphaGo's move 37), within niche domains such as programming, where verifiable rewards are abundant (e.g. "does the code's test pass successfully?"). This means an LLM can learn on its own within such domains. However, for general capabilities where such signals are not available or easily self-generated, there is no clear way forward beyond learning directly from human-generated data.
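The "verifiable reward" idea can be sketched as a reward function that simply runs a candidate solution against a test. Everything here (the function name, the toy task) is illustrative, a minimal sketch of the concept rather than any production RL setup:

```python
# Minimal sketch of a verifiable reward for code generation: execute the
# model-generated code plus a test; pass/fail becomes a binary reward.
# All names here are illustrative, not from a real RL framework.

def verifiable_reward(candidate_code: str, test_code: str) -> float:
    namespace: dict = {}
    try:
        exec(candidate_code, namespace)  # define the candidate function
        exec(test_code, namespace)       # run the test (raises on failure)
        return 1.0
    except Exception:
        return 0.0

# A correct and an incorrect "model sample" for the same toy task:
good = "def add(a, b):\n    return a + b"
bad = "def add(a, b):\n    return a - b"
test = "assert add(2, 3) == 5"

print(verifiable_reward(good, test))  # 1.0
print(verifiable_reward(bad, test))   # 0.0
```

Because the reward is computed mechanically, the model can generate, score, and learn from its own attempts without a human in the loop, which is exactly what is missing in domains without such signals.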

The Current Bleeding Edge

The bleeding edge of artificial intelligence is both broad and deep, spanning a wide spectrum of experimental approaches aimed at improving how machines learn and reason. On one end, there are efforts to refine the Transformer architecture itself, through innovations such as enhanced memory mechanisms (e.g. DeepSeek Engram), more efficient inference techniques like n-bit model parameter quantisation, optimisations such as KV caching and KV-cache quantisation (e.g. PolarQuant), and architectural tweaks including modified residual connections (e.g. mHC).
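To make the quantisation idea concrete, here is a generic symmetric n-bit scheme (an illustration of the general technique, not DeepSeek's or PolarQuant's actual method):

```python
# Generic symmetric n-bit quantisation: map floats to integers in
# [-(2^(n-1)-1), 2^(n-1)-1] via a per-tensor scale, then dequantise back.
# Illustrative only; real schemes use per-channel scales, zero points, etc.
from typing import List, Tuple

def quantise(weights: List[float], n_bits: int = 8) -> Tuple[List[int], float]:
    qmax = 2 ** (n_bits - 1) - 1                     # e.g. 127 for 8-bit
    scale = max(abs(w) for w in weights) / qmax or 1.0
    return [round(w / scale) for w in weights], scale

def dequantise(q: List[int], scale: float) -> List[float]:
    return [x * scale for x in q]

w = [0.12, -0.5, 0.33, 1.0]
q, s = quantise(w, n_bits=8)
approx = dequantise(q, s)
# approx is close to w; lowering n_bits trades accuracy for memory.
```

Storing `q` as small integers instead of 32-bit floats is where the memory and bandwidth savings during inference come from.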

On the other end, some researchers are exploring fundamentally different paradigms for learning. Notably, Yann LeCun has proposed the Joint Embedding Predictive Architecture (JEPA), which aims to move beyond traditional generative modelling toward systems that learn more abstract representations of the world. In parallel, evolutionary and compositional approaches are gaining some attention, where existing foundation models are iteratively combined, mutated, and selected to produce new models with capabilities optimised for specific user-defined objectives (e.g. better writing skills).
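The evolutionary/compositional loop can be sketched as merge, mutate, select over model weights. Everything here is a stand-in: the "models" are tiny weight vectors and the fitness function is a hypothetical objective, not a real model-merging pipeline:

```python
import random

# Toy evolutionary model merging: offspring interpolate two parent weight
# vectors plus a small mutation; the weakest population member is replaced
# when a child scores better. All objects here are illustrative stand-ins.

def merge(a, b, t):
    return [t * x + (1 - t) * y for x, y in zip(a, b)]

def mutate(w, sigma=0.05, rng=random):
    return [x + rng.gauss(0, sigma) for x in w]

def fitness(w):
    # Stand-in for a user-defined objective (e.g. a writing-quality score):
    # here, closeness to a hypothetical "ideal" weight vector.
    target = [0.5, -0.2, 0.8]
    return -sum((x - t) ** 2 for x, t in zip(w, target))

def evolve(pop, generations=20, rng=random):
    for _ in range(generations):
        a, b = rng.sample(pop, 2)
        child = mutate(merge(a, b, rng.random()), rng=rng)
        worst = min(range(len(pop)), key=lambda i: fitness(pop[i]))
        if fitness(child) > fitness(pop[worst]):
            pop[worst] = child           # selection: replace the weakest
    return max(pop, key=fitness)

random.seed(0)
population = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
best = evolve(population)
```

The appeal of this family of methods is that the objective is scored, not differentiated, so existing foundation models can be recombined toward goals that have no gradient.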

Taken together, the frontier of AI is not defined by a single direction, but by a diverse set of competing and complementary ideas (many of which are not mentioned here), each aiming to achieve more efficient, capable, and general forms of intelligence.

u/musty_O — 3 days ago

The Trajectory Of Artificial Intelligence [D]

This is my first piece, in which I very briefly look at where AI is going. It's a deliberately shallow article that doesn't go into depth; it just lays out the 4-6 visible ideas within the field.

medium.com
u/musty_O — 3 days ago