r/opengl
Using OpenGL 4.6 as a modern(ish) API
I've looked around and it seems that resources for 4.6 are few and far between, especially compared to something like 3.3, and it looks like basically no major games used 4.6 either.
I'm curious how viable it actually is for games, engines, etc., or whether there's some good reason no one seems to use this version, because supposedly it's better optimized for modern hardware (albeit not on the same level as something like Vulkan).
My views on LearnOpenGL.com
Hi, I recently got interested in learning graphics programming with OpenGL and I've heard a lot about learnopengl.com. So, I gave it a try.
I just completed the "Hello Triangle" section of the page, and honestly I learned quite a bit about vertex buffer objects, vertex shaders and vertex array objects. Though, honestly, I had to ask ChatGPT about some concepts that didn't "click" for me immediately, like the use of glVertexAttribPointer and why a VAO is important, but I finally got it.
The thing I observed, and which bothers me a bit, is that the author explains a concept and drops in the code snippets that correspond to it, but doesn't tell you where the code should go in your existing codebase. At one moment it explains the use of a VBO and drops some snippets, at another how the vertex attribute pointers work with some more snippets, and then finally the use of a VAO for storing all those configurations (and which buffer data they use), with snippets thereafter. But it doesn't seem to explain how it should all be laid out in order, and I had a hard time figuring that out.
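For anyone else who gets stuck on the same thing, the order I eventually pieced together boils down to this (pseudocode, with the GL calls in parentheses):

```
setup (once):
    create VAO              (glGenVertexArrays)
    bind VAO                (glBindVertexArray)        -- VAO now records the state below
    create + bind VBO       (glGenBuffers, glBindBuffer(GL_ARRAY_BUFFER))
    upload vertex data      (glBufferData)
    describe the layout     (glVertexAttribPointer)    -- captures the currently bound VBO
    enable the attribute    (glEnableVertexAttribArray)
    unbind VAO              (glBindVertexArray(0))     -- optional tidiness

draw (every frame):
    bind shader program     (glUseProgram)
    bind VAO                (glBindVertexArray)
    draw                    (glDrawArrays / glDrawElements)
```

The part that confused me most is that glVertexAttribPointer doesn't take a buffer argument; it silently reads whichever VBO is bound to GL_ARRAY_BUFFER at that moment, and the VAO remembers that association.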
Okay, it could just be me. Anyway, it's a really helpful resource, and I might get used to it after a while.
I am just here to say Hi to the community. I hope I do well in my journey to learn graphics programming. Let me know what you think about this.
What is the simplest 3D physics library that I can plug into my OpenGL project?
I wanted to add simple physics interactions to my project, but I cannot, for the life of me, figure out how to properly link any of the popular physics libraries to my Visual Studio/OpenGL project. I have been trying for a few hours now to get something/anything to work, and I am about to give up.
I don't need it to do much. Just basic collisions and simple shapes along a 3D plane.
With GLEW and SDL, I can just download the binaries, add the include folder, and reference the lib files. With every physics engine I have found (PhysX, Bullet, Jolt, ReactPhysics3D, Chrono), it seems as though I have to build it myself, scour the directories for the lib file I should be referencing, and then try to figure out what Visual Studio is looking for to resolve the LINK errors. I got close to getting Bullet to work, but I ran into runtime errors when following the example code provided on GitHub.
So I am asking: what is the simplest 3D physics library that I can plug into my project? Or is there a more detailed guide I can follow for adding physics to an OpenGL Visual Studio project?
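For reference, the closest I've gotten with Bullet was letting CMake fetch and build it instead of hunting for lib files by hand (Visual Studio 2019+ can open a CMake folder directly). This is roughly the sketch I've been trying; the target and option names are taken from Bullet's own CMakeLists, and `main.cpp` stands in for your sources, so treat the details as things to verify:

```cmake
cmake_minimum_required(VERSION 3.14)
project(my_opengl_game)

include(FetchContent)
FetchContent_Declare(
  bullet3
  GIT_REPOSITORY https://github.com/bulletphysics/bullet3.git
  GIT_TAG        3.25   # pin a release tag
)
# Skip Bullet's demos/tests so only the libraries get built
set(BUILD_BULLET2_DEMOS OFF CACHE BOOL "" FORCE)
set(BUILD_CPU_DEMOS     OFF CACHE BOOL "" FORCE)
set(BUILD_UNIT_TESTS    OFF CACHE BOOL "" FORCE)
set(BUILD_EXTRAS        OFF CACHE BOOL "" FORCE)
FetchContent_MakeAvailable(bullet3)

add_executable(game main.cpp)
target_link_libraries(game PRIVATE BulletDynamics BulletCollision LinearMath)
target_include_directories(game PRIVATE ${bullet3_SOURCE_DIR}/src)
```

With something like that, CMake builds the libs and wires up the include paths and linker inputs itself, which sidesteps the LINK-error scavenger hunt.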
This is a screenshot from my voxel game engine, which I am using to make a Minecraft clone. I am a junior in high school working on this as a hobby, which I hope to be able to present to a college or workplace in the future.

I implemented my own GUI system, which works nicely with the event handling system I also made from scratch. You cannot see it in the screenshot, but the game has an entity system as well, allowing entities to rotate and move around with full collision. The engine also has a voxel placing and removal system, which uses a DDA algorithm. There is no inventory system yet (although I will most likely work on one in the future, and it shouldn't be hard given that my GUI is modular and adaptable), so for now the number keys are mapped to voxel ids, which are placed when the user right-clicks on a valid surface. Like many other voxel engines, this one uses Perlin noise, in this case from the LibNoise library.

The button that toggles "Complex Lighting" just toggles the use of ambient, diffuse, and specular lighting in the fragment shader for the voxels, which I have been playing around with today. In the screenshot it is toggled on, and the light reflected off the generated ocean comes from a light source I have been using for testing. Normally all lighting is done once at chunk generation, and each vertex has a brightness attribute that is passed to the shaders. I am using a 5x5 atlas system for textures, which can easily be expanded when needed (for example, when adding more types of voxels). Greedy meshing has not been implemented yet, but there is face culling. Some data is currently being stored redundantly, which causes the program to use more RAM than it should; I am fixing that now.

I have a public GitHub repo where you can download the project: https://github.com/Mr-Snazz/SnazzCraft.
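For the curious, the voxel picking DDA boils down to walking the grid one cell boundary at a time, in the spirit of Amanatides & Woo. Here is a stripped-down, language-agnostic Python sketch of the idea (the engine's real version also checks each visited voxel against chunk data and stops at the first solid hit, which is omitted here):

```python
import math

def traverse_voxels(origin, direction, max_steps):
    """Walk the voxel grid along a ray, one cell boundary at a time
    (Amanatides & Woo style DDA). Returns the visited voxel coords."""
    voxel = [math.floor(o) for o in origin]
    step, t_max, t_delta = [], [], []
    for o, d, v in zip(origin, direction, voxel):
        if d > 0:
            step.append(1)
            t_max.append((v + 1 - o) / d)   # ray t at the next boundary on this axis
            t_delta.append(1 / d)           # t between successive boundaries
        elif d < 0:
            step.append(-1)
            t_max.append((v - o) / d)
            t_delta.append(-1 / d)
        else:
            step.append(0)
            t_max.append(math.inf)          # ray never crosses on this axis
            t_delta.append(math.inf)
    visited = [tuple(voxel)]
    for _ in range(max_steps):
        axis = t_max.index(min(t_max))      # nearest boundary decides the axis
        voxel[axis] += step[axis]
        t_max[axis] += t_delta[axis]
        visited.append(tuple(voxel))
    return visited

# traverse_voxels((0.5, 0.5, 0.5), (1, 0, 0), 3)
# → [(0, 0, 0), (1, 0, 0), (2, 0, 0), (3, 0, 0)]
```

In the engine the loop ends as soon as a visited voxel is solid, and the crossing axis also gives the face normal for placement.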
(Help) OpenGL debug output with SDL
I wanted to make OpenGL's debug output work with the callback function from learnopengl (In Practice -> Debugging). They did it with GLFW, so I wanted to do the same with SDL3 instead, but my engine crashes on start. I don't know what to do, really.
What to do after the basics?
I wanna make something simple in OpenGL, like a grass renderer or something along those lines, and I want to be able to do it on my own with only man pages/documentation. If I want to do it in C, how much experience should I have with the language? Also, are there any newer resources to learn from? I started a long time ago and am just coming back.
Hey everyone!
Learning computer graphics always felt like a "draw the rest of the owl" situation to me. There are plenty of art showcases out there, but nowhere to actually practice the technical fundamentals.
To fix that, I built Shader-Learning, a platform that's basically LeetCode but for CG. We have over 300 tasks now, each with detailed theory to back it up.
Today, we are launching our first-ever contest. It is 100% free, no ads, just a fun way to test your skills.
The Vibes
Unlike traditional shader art competitions, https://shader-learning.com focuses on technical problem-solving. You get a task, an editor, and your code is validated against specific requirements in real-time.
The Details
- When: Right now! (Starts May 1st, 7:00 AM UTC)
- Duration: The contest window is open for 48 hours, but once you hit "Start," you have 120 minutes to finish.
- Language: GLSL or HLSL.
- The Challenges: 3 tasks covering SDFs & Trig, Calculus foundations, and Ray Tracing.
- How to win: The person who solves the most tasks in the shortest time wins. Please note that each incorrect submission adds a time penalty to your final score.
👉 Join here: https://shader-learning.com/contest
(Note: If you get a 404, just hit Ctrl+F5 to refresh your cache!)
Join our Discord and follow us on Instagram so you don't miss any updates!
I’m currently working on a 3D character project intended for use on a website, and I’ve run into several challenges specifically related to creating and optimizing a cape. I’m hoping to get some detailed advice from people who have experience with cloth simulation, real-time rendering, and web optimization.
Project Overview:
The goal is to display a 3D character on a website (real-time or near real-time rendering).
The character includes a cape, which I want to feel dynamic and visually appealing.
I’m using Marvelous Designer to create and simulate the cloth because of its realism.
Main Challenges:
- Achieving Natural Cloth Motion (Wind / Flowing Cape):
I want the cape to behave as if it’s being affected by wind—flowing, waving, and moving naturally. However:
The simulation often looks stiff or too heavy.
Increasing wind values sometimes causes chaotic or unrealistic movement instead of smooth flowing motion.
I’m unsure how to balance fabric properties (weight, stiffness, damping, etc.) to get a believable result.
- Creating a Seamless Loop Animation:
Since this will be used on a website, I need the animation to loop continuously without noticeable jumps. Right now:
The simulation always has a clear start and end point.
When I try to loop it, there’s a visible “reset” or pop.
I’m not sure if I should simulate a long animation and trim it, manually match start/end frames, or use another technique entirely.
- Heavy Geometry / Performance Issues:
This is probably my biggest concern:
The cape mesh exported from Marvelous Designer is extremely dense—sometimes 10x heavier than the base character.
Increasing the particle distance (to lower the density) helps slightly but quickly destroys the shape/detail.
This level of geometry is not suitable for web use (especially with WebGL / Three.js or similar frameworks).
What I’ve Tried So Far:
Adjusting particle distance and mesh density
Tweaking fabric presets and physical properties
Testing different wind controllers and simulation strengths
Exporting and trying basic optimizations in other 3D tools
Where I Need Guidance:
I would really appreciate insights on the following:
A. Workflow & Tools
Is Marvelous Designer the right tool for this kind of use case (real-time/web)?
Should I simulate in Marvelous Designer and then retopologize in another software like Blender or Maya?
Are there better pipelines for game/web-ready cloth (e.g., using baked animations, rigged bones, or shader-based tricks)?
B. Optimization Techniques
Best practices to drastically reduce polygon count while preserving silhouette
Retopology workflows for cloth
Using normal maps or baked details to fake high-resolution cloth
C. Animation Approach
How professionals create looping cloth animations
Whether to use simulation vs. manual animation (bones, shape keys, etc.)
Tips for blending start/end frames seamlessly
D. Real-Time Considerations
Recommended polycount ranges for animated assets on the web
Whether I should avoid cloth simulation entirely and fake it for performance
Any tricks specific to WebGL / Three.js / Babylon.js pipelines
E. General Advice
Common mistakes when using Marvelous Designer for real-time assets
Any tutorials, courses, or resources that explain a complete pipeline from simulation → optimization → web integration
I’d really appreciate detailed answers, workflow breakdowns, or even small tips that could point me in the right direction. Thanks a lot in advance!
Previous drawing cleared when switching shader
So I am planning to use instancing to draw the wheels of a truck. I draw the truck first and then the wheels. Since I need a separate shader for instancing, whenever I bind that shader the truck disappears and only the wheels are visible.
For the shaders, I have a single fragment shader (main.fs) and two vertex shaders (main.vs & wheels.vs), which I have attached below.
> main.vs
#version 410
layout(location = 0) in vec3 positionIn;
layout(location = 1) in vec2 textureCoord;
layout(location = 2) in vec3 normalIn;
out vec3 position;
out vec2 textureCoordinate;
out vec3 normal;
uniform mat4 projMat;
uniform mat4 transMat;
void main() {
    normal = (transMat * vec4(-normalIn, 0)).xyz;
    textureCoordinate = textureCoord;
    position = (transMat * vec4(positionIn, 1)).xyz;
    gl_Position = projMat * transMat * vec4(positionIn, 1);
}
> wheels.vs
#version 410
layout(location = 0) in vec3 positionIn;
layout(location = 1) in vec2 textureCoord;
layout(location = 2) in vec3 normalIn;
out vec3 position;
out vec2 textureCoordinate;
out vec3 normal;
uniform mat4 projMat;
uniform vec3[10] offsets;
uniform vec2[10] rotationXY;
void main() {
    vec3 offset = offsets[gl_InstanceID];
    vec2 rotation = rotationXY[gl_InstanceID];
    mat4 transMat = mat4(
        cos(rotation.y), 0, sin(rotation.y), 0,
        0, 1, 0, 0,
        -sin(rotation.y), 0, cos(rotation.y), 0,
        0, 0, 0, 1
    );
    transMat *= mat4(
        1, 0, 0, 0,
        0, cos(rotation.x), -sin(rotation.x), 0,
        0, sin(rotation.x), cos(rotation.x), 0,
        0, 0, 0, 1
    );
    normal = (transMat * vec4(-normalIn, 0)).xyz;
    textureCoordinate = textureCoord;
    position = (transMat * vec4(positionIn + offset, 1)).xyz;
    gl_Position = projMat * vec4(position, 1);
}
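As a side note from debugging, I reproduced the instance transform on the CPU in a throwaway Python script. One thing worth knowing: GLSL's mat4(...) constructor consumes its arguments column by column, so each matrix literal in wheels.vs defines columns, not rows (for a rotation matrix, reading the same numbers row by row gives the transpose, i.e. a rotation in the opposite direction). The sketch below uses plain row-major math:

```python
import math

def mat_mul(a, b):
    """Multiply two 4x4 row-major matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def rot_y(t):
    """Standard rotation about Y (row-major)."""
    c, s = math.cos(t), math.sin(t)
    return [[c, 0, s, 0],
            [0, 1, 0, 0],
            [-s, 0, c, 0],
            [0, 0, 0, 1]]

def rot_x(t):
    """Standard rotation about X (row-major)."""
    c, s = math.cos(t), math.sin(t)
    return [[1, 0, 0, 0],
            [0, c, -s, 0],
            [0, s, c, 0],
            [0, 0, 0, 1]]

def transform(m, v):
    """Apply a 4x4 matrix to a 3D point (w = 1), returning xyz."""
    return [sum(m[i][j] * (v[j] if j < 3 else 1.0) for j in range(4))
            for i in range(3)]

# Same composition order as the shader (transMat = Ry, then *= Rx),
# so the pitch hits the vertex before the yaw does.
wheel = mat_mul(rot_y(math.radians(90)), rot_x(math.radians(10)))
```

For a sanity check, transform(rot_y(math.radians(90)), [1, 0, 0]) lands at roughly [0, 0, -1] under this convention.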
And this is how I render the object. Note that child.render() has the exact same structure, apart from the fact that no transformation matrix is sent and glDrawElementsInstanced is used instead.
texture.bind();
glUniformMatrix4fv(transMatrix, false, getTransformationMatrix());
glBindVertexArray(vertexArray);
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
glEnableVertexAttribArray(2);
glDrawElements(mode, vertexCount, GL_UNSIGNED_INT, 0);
glDisableVertexAttribArray(0);
glDisableVertexAttribArray(1);
glDisableVertexAttribArray(2);
glBindVertexArray(0);
texture.unbind();
for (LCObject child : children) {
    child.getShader().bind();
    child.render();
}
PS: I'm on a Mac M1, if that's of any importance.
Hi everyone!
This is a bit of an update on the game I'm working on, a TD inspired by Warcraft 3 where you are tasked with defending "something". I can't talk much about it because the game has a story and campaign, but here is a small update on the first act/map: Forest Pass.
I still need to rework the UI a bit; today I added new textures, flowers and decorations, baked in AO, etc.
The game features 3 classes: mage, ranger and holy knight. Each class has 4 different abilities (you can choose only one of them, therefore 4 different ways of playing each class) and each ability has 3 different upgrades to choose from (therefore 3 different builds to specialize into for each ability). You also have talents that give you raw stats.
The first map features 10 distinct enemies, a total of 30+ waves before the final boss.
If you guys are interested in seeing more of this game, I can add a Discord link in the next post.
https://pastebin.com/jbufHHk9 — the pixels render as black, even though the data is sent to the GPU. I've swapped out the BufferedImageBasedTexture for a SimpleTexture that just loads a file using stb_image, and that works perfectly, and the shader is fine, but this texture class doesn't work. It creates a valid texture ID with a blank texture attached to it. Nothing is wrong as far as I can tell, but it still renders black. Please help. I've tried byte arrays, but LWJGL doesn't accept them, and I've tried MemoryUtil, but that doesn't work either.
Hello,
I have been trying to make a 3D snake game in OpenGL. I have the basic game logic working, but right now every section of the snake is just a square rendered at some point (x, y, z). I want to create a single cylinder-like object for the body that grows and shrinks at run time, and I'm wondering how to go about that.
Should I do the calculations in world space or the Local space?
Should I be using functions like glGetNamedBufferSubData?
Thanks in advance
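To make the question concrete, this is the direction I was imagining, sketched in Python: one ring of vertices around each body point, so the tube grows by appending rings and shrinks by dropping them. The frame choice here is deliberately naive and would need more care on sharp turns:

```python
import math

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]

def tube_vertices(path, radius, segments):
    """One circle of `segments` vertices around each point of `path`,
    oriented perpendicular to the local direction of travel."""
    verts = []
    for i, p in enumerate(path):
        j = min(i, len(path) - 2)       # segment at (or before) this point
        d = normalize([path[j + 1][k] - path[j][k] for k in range(3)])
        helper = [1, 0, 0] if abs(d[0]) < 0.9 else [0, 1, 0]
        u = normalize(cross(d, helper))  # first axis of the ring plane
        v = cross(d, u)                  # second axis, perpendicular to both
        for s in range(segments):
            a = 2 * math.pi * s / segments
            verts.append([p[k] + radius * (math.cos(a) * u[k] + math.sin(a) * v[k])
                          for k in range(3)])
    return verts
```

Consecutive rings then get stitched with two triangles per segment, and growing the snake is just appending one ring's worth of vertices; updating a preallocated buffer with glBufferSubData (or glNamedBufferSubData with DSA) avoids reallocating every frame — glGetNamedBufferSubData is for reading data back, which you shouldn't need here.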