# Thread: GPU RayShooting and autonomous racer in track without waypoints

1. ## Re: GPU RayShooting and autonomous racer in track without waypoints

LOL, don't hang yourself... :'(

I was just saying that the drunk mouse doesn't know which way is forward, or if he is even on the track. (I know you were just playing with the feelers. I was being a smart {Fill in bad word here}.)

The "Distance" value in the formula... which is +(distance), and I put +(distance*10).

You are setting a specific distance over a timeframe... (distance is speed times time), which you then, again, divide by time... (You add the distance from the point to the speed, and then divide by time?)

Distance from the point should be used to adjust speed, only when the distance from the point is within range, and only if adjustment is required. (I am not sure why you add the distance from the target to the target PUSH value.)

MIN(slow, fast) will not be good for actual game-play... because it turns the player into a brick wall if you collide with them. When you are hit from behind, you move faster than your max speed, and 0 is the min? (I may not understand the implementation of that code, but multiplying the distance by 10 was the only solution that produced realistic motion, no matter what speed the vehicle was moving. I am starting to think that is just generic code you use to test... but I thought it was part of the AI code, which would be moving the other players.)

I am working on the physics values, which you should be able to use without having to re-formulate for every style of sensor calculation. You only have to output the FORCE modification of the sensor/control. I will make a separate post for GAME PHYSICS, since they are not specific to AI, but part of it.

I LOVE the sensors... I just didn't understand how they work, and the movement code was not helping me understand, since it stopped the mouse as it was trying to move forward.

I still don't understand how this code knows if it is going the correct direction. (Without way-points, it will still take the shortest path, if it can feel its way there. Like at an intersection where it could be bumped onto a shorter destination, or if it were flipped out of the track, onto the scenery.)

2. ## Re: GPU RayShooting and autonomous racer in track without waypoints

Hi Jason,

LOL, don't hang yourself...
Ok, I won't

Thanks for explanations!

The "Distance", value in the formula... Which is +(distance), and I put +(distance*10).
Did you mean this part?
Speed = MySensors(%sensor_front).distance
' ...
TBGL_EntityPush(%sScene, %eVehicle, 0, 0, Speed/FrameRate)
I must admit my naming of the variable was dirty indeed.

In fact, Speed/FrameRate equals:
<distance> / ( 1 / <timeOfFrame> )
which equals
<distance> * <timeOfFrame>
that is, a velocity numerically equal to <distance>, applied over one frame. It comes straight from the basic formula velocity = distance / time: pushing by velocity * <timeOfFrame> every frame moves the entity <distance> units per second, regardless of the frame rate.
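A minimal sketch of that idea in Python (the thread's code is thinBasic; the function name here is mine, mirroring the `TBGL_EntityPush(..., Speed/FrameRate)` call):

```python
# Frame-rate-independent movement: push by speed / frame_rate each frame,
# so total displacement per second equals the speed, at any frame rate.
def per_frame_push(speed_units_per_sec: float, frame_rate_fps: float) -> float:
    """Distance to push the entity this frame.

    frame_rate is frames per second, so 1/frame_rate is the frame duration;
    speed / frame_rate == speed * frame_time.
    """
    return speed_units_per_sec / frame_rate_fps

# Over one second the total displacement is the same at any frame rate:
total_60 = sum(per_frame_push(20.0, 60.0) for _ in range(60))    # 60 frames at 60 FPS
total_200 = sum(per_frame_push(20.0, 200.0) for _ in range(200)) # 200 frames at 200 FPS
```

Both sums come out at 20 units, which is the point: the per-frame step shrinks as the frame rate rises.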

I LOVE the sensors... I just didn't understand how they work
Time for explanation ...
When you render an image using OpenGL, you are able to retrieve the x, y, z value of every pixel rendered.
So... to get the sensor-obstacle distance, I used a dummy camera placed at the position and direction of the sensor, rendered an image from there at a super-tiny resolution (just to memory, not to screen), and read the x, y, z value of the center pixel. Then I calculated the distance from the sensor to (x, y, z)... and that is what goes into the sensor.
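The final step of that read-back, sketched in Python (the rendering itself is TBGL/OpenGL; only the distance calculation is shown, and the names are mine):

```python
import math

# After rendering a tiny off-screen image from a dummy camera at the sensor,
# the center pixel's world-space (x, y, z) is read back, and the sensor value
# is simply the Euclidean distance from the sensor to that point.
def sensor_distance(sensor_pos, hit_point):
    """Distance between the sensor and the point seen at the center pixel."""
    return math.dist(sensor_pos, hit_point)

d = sensor_distance((0.0, 1.0, 0.0), (3.0, 1.0, 4.0))  # a 3-4-5 triangle: d = 5.0
```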

I am looking forward to your physics calculations. I am currently working on particles again, to stay on plan

How does the drunk cube know where to go?
We take as fact that the track has a left and a right border. Then, if the distance from the left sensor is too small, we turn right; if the distance from the right sensor is too small, we turn left. That keeps the object on a "drunk man's straight line" approximation.

The front sensor is used to calculate that controversial speed: if there is a lot of space in front of the cube, it can afford to go faster; if there is little space, it goes slower.
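The two rules together can be sketched like this (Python, not the original thinBasic; the margin and speed constants are made-up tuning values):

```python
# Steer away from whichever side wall is too close, and scale speed with
# the free space reported by the front sensor.
def steer_and_speed(left_dist, right_dist, front_dist,
                    wall_margin=2.0, speed_per_unit=1.0, max_speed=30.0):
    turn = 0.0
    if left_dist < wall_margin:
        turn = +1.0            # too close to the left wall -> turn right
    elif right_dist < wall_margin:
        turn = -1.0            # too close to the right wall -> turn left
    speed = min(front_dist * speed_per_unit, max_speed)  # more space ahead -> faster
    return turn, speed

turn, speed = steer_and_speed(left_dist=1.0, right_dist=5.0, front_dist=10.0)
# hugging the left wall -> turn right; 10 units of space ahead -> speed 10
```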

Petr

3. ## Re: GPU RayShooting and autonomous racer in track without waypoints

So... to get the sensor-obstacle distance, I used a dummy camera placed at the position and direction of the sensor, rendered an image from there at a super-tiny resolution (just to memory, not to screen), and read the x, y, z value of the center pixel. Then I calculated the distance from the sensor to (x, y, z)... and that is what goes into the sensor.
Rendering an image all the time, for every sensor, would eat a lot of performance when you have up to 7 vehicles (my guess for the maximum on the track) to calculate.

Why don't you look into ray cast collision?

4. ## Re: GPU RayShooting and autonomous racer in track without waypoints

Which direction...

If it gets hit and ends up facing the wrong way... how does it know that left is not left, and right is not right?

I am sure it will follow the wall quite well, backwards, staying an equal distance from the left/right, which is now right/left. (I sort of get the camera thing... I didn't realize cameras could render off-screen, but I am familiar with mirror/camera tricks for reflections.)

Sorry, the distance comment was aimed at the other AI, with the track. (That trick did not work here. LOL.)

Found a small glitch with the game timer also... If it is not called initially, it returns 1 on the first cycle. And if the frame time is greater than one second, the value is still 1; e.g. 0.5 FPS reads as 1 FPS... That would eventually cause a crash, when the game should be treating it as a reason to shut down or lower detail. (I compared it against the HiResTimer from the core. Actually, the game timer is a frame-speed killer: it takes more time to process than a normal compare of the core HiResTimer.)

For multiple cars, couldn't you just move the camera to each car's position, render, calculate, and cycle on to the next? (At 60 FPS, you could check even cars one frame and odd cars the next... trusting that your last scan value stays good for 1/30th of a second. If looking ahead and setting a destination, you could even do one car per frame.)
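The proposed staggering is easy to sketch (Python; the helper name is mine):

```python
# Update even-indexed cars on even frames and odd-indexed cars on odd frames,
# halving the number of sensor renders needed per frame.
def cars_to_update(frame_number: int, car_count: int) -> list:
    """Indices of cars whose sensors are refreshed on this frame."""
    return [i for i in range(car_count) if i % 2 == frame_number % 2]

even_frame = cars_to_update(0, 7)   # cars 0, 2, 4, 6
odd_frame = cars_to_update(1, 7)    # cars 1, 3, 5
```

At 60 FPS every car is still refreshed 30 times a second, so each reading is at most 1/30th of a second stale.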

5. ## Re: GPU RayShooting and autonomous racer in track without waypoints

Rendering the image all the time and for every sensor would mean a lot of performance eaten by that when you have up to 7 vehicles (my guess as max vehicles on the track) to calculate.

Why don't you look into ray cast collision?
Hi Mike,

of course I am looking into it
I just wanted to do a dummy test before the real work, to see how it will be used and so on.
Jason asked me how this posted script works (at least that is how I understood it), so I explained it.

Jason,

Actually, the game timer is a frame-speed killer
I am not sure what you refer to as the game timer. If you mean TBGL_GetFrameRate, then please know it does basically exactly the same thing (it even uses the same API) as HiResTimer on most PCs.
It just limits the value so it never returns zero, to avoid division by zero. It also detects the architecture it runs on during module load, so if performance counters are not present, it safely falls back to GetTickCount.

The speed impact it has is microscopic and does not degrade game performance in any serious way.

It cannot forecast the rendering time of the first frame; if you like, move the TBGL_GetFrameRate call after the render and initialize the FPS variable with your own estimate for the first frame.

I will continue work on the math approach to rays (and collision in general) in the coming weeks.

Thanks,
Petr

6. ## Re: GPU RayShooting and autonomous racer in track without waypoints

When I said it was a frame-speed killer, I was talking about the impact of using the value it returns to calculate the XZ position, not about the FPS itself: the calculated speed (distance traveled in a second) that results from low return values.

At 200 FPS, you are moving 1/200th, and that allows for good formula math.

When it falls to 60 FPS, you are moving 1/60th, and jumping through objects and wall detection can look spastic.

When it goes to 10 FPS, you are moving 1/10th, and if 1/10th is 3x your body-size/sensor-length, you fly through most objects.

This is why I set the gameTimer value to be >2 (that is, 1/2 a second, or 2 FPS), but I will have to change that to a higher value once actual collision detection begins. At 20 meters per second (72 KPH, or 44.74 MPH), half a second, or one game frame, is 10 meters. Walls and other players would have to be 20 meters thick to detect a collision, and sensors would have to be equally long.

However, at 60 FPS, 20 meters per second is only 0.33 meters of motion per game frame. We can be 1/3 of a meter thin and still be only borderline at evading collision detection. (E.g., your sensor only has to look ahead about a foot, or your probe only has to be a foot long, to see a wall fast enough and make adjustments.)
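The arithmetic behind those numbers, as a quick Python check (function names are mine; this is the standard "tunneling" condition for discrete-step movement, not code from the thread):

```python
# Per-frame travel distance: speed / fps. A moving object "tunnels" straight
# through an obstacle when it covers more than the obstacle's thickness
# (or the sensor's reach) in a single frame.
def travels_per_frame(speed_m_s: float, fps: float) -> float:
    return speed_m_s / fps

def can_tunnel(speed_m_s: float, fps: float, thickness_m: float) -> bool:
    return travels_per_frame(speed_m_s, fps) > thickness_m

step_60 = travels_per_frame(20.0, 60.0)  # ~0.33 m per frame at 60 FPS
step_2 = travels_per_frame(20.0, 2.0)    # 10 m per frame at 2 FPS
```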

I am just trying to bring this into view, before we wonder why our code works with one ship, at 600+ fps, but fails when the FPS begins to drop.

For your sensor... if you just used it to pick a valid spot to move to (like a portable track), selecting a spot in the distance... you would not have to check every frame, only when you reach your set destination, or come close to it.

E.g. (based on how I think it works, or could work), this would be an alternative to scanning unlimited distance and seeing "space", which might not register, and also an alternative to looking only a fixed distance. (Your velocity should be a factor in how far you look. If you are going 2 MPH, there is no point in looking a mile ahead; likewise, if you are going 40 MPH, there is no point in looking only one meter ahead.)
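A velocity-scaled look-ahead like that can be sketched in one line (Python; the reaction time and clamp bounds are made-up tuning values, not from the thread):

```python
# Look ahead roughly as far as you will travel in the time you need to
# react, clamped to sensible minimum and maximum sensor ranges.
def look_ahead(speed_m_s: float, reaction_time_s: float = 1.5,
               min_range_m: float = 1.0, max_range_m: float = 100.0) -> float:
    return min(max(speed_m_s * reaction_time_s, min_range_m), max_range_m)

slow = look_ahead(0.9)    # ~2 MPH: barely look past the nose
fast = look_ahead(18.0)   # ~40 MPH: look roughly 27 m ahead
```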

Camera on the nose... clipped view distance of about 100 meters... Set a sprite on your nose and push it away until it disappears from view... At that point, you rotate the camera left and right until it comes back into view.

The more time there is left, the more you keep pushing it back, until you reach the 100 meter limit.

That will let you avoid other players on the same Y axis, and lets you select a location toward an inner turn or an outer turn. (Provided you looked ahead to know the turn direction. Actually, the track builder can calculate that for us.)

Not sure how well that will work on an incline... but it can completely add a thinking element to the AI for avoiding collision, prior to it happening. (If something is in front of your destination, you can select a position to the left or right of it.)

7. ## Re: GPU RayShooting and autonomous racer in track without waypoints

Sorry to dredge up an old post... but until the intersect code is finished, I am playing with this camera model to get the required values. (Only using it for the floor and gravity at the moment. I don't trust that drunk mouse. LOL.)

Petr, I noticed in the demo that there is code for fonts... but I don't see any text in the demo. Is this a bug, or is that part of the code incomplete or turned off somewhere? (I don't have a clue how to use the fonts here.)

Just a curiosity, about the fonts... Do you have separate code so we can use fonts inside the 3D area, and another set of fonts or a layer for the 2D (Screen) area? I am envisioning 3D words flying around the screen like sprites, but also envisioning GUI notices being unbound to 3D, positioned on the screen directly. (Thinking back to your FOV font issue.)

8. ## Re: GPU RayShooting and autonomous racer in track without waypoints

Hi,

with the latest thinBasic, the script from the first post of this thread shows green text in the upper left corner of the screen, as seen in the screenshot.

TBGL provides 2 approaches for fonts, have a look in TBGL help file on Functions list/Fonts and text plotting.

I would like to ask anybody to re-download the script from the first post of this thread and tell me if there is any problem displaying the text.

Thanks,
Petr

9. ## Re: GPU RayShooting and autonomous racer in track without waypoints

Continuation of this topic, related to ATi XPRESS issues can be found here:
http://community.thinbasic.com/index.php?topic=2268.0

Petr

