
Thread: Video of what I was seeing in my head... Since I never played wipe-out.

  1. #1

    Video of what I was seeing in my head... Since I never played wipe-out.

    For those who are still interested... This is what I was seeing in my head, with the exception of an actual ground and the city being the surrounding structures.

    The demo is, well, unique. I imagine the game would actually be fun to play if it weren't so easy to fly off the edge. (A good example of unfair AI, which seems to have superhuman track control.)

    It also shows something similar to what I was talking about with the round force-field, and fill-in track items like lights and rails. Plus the track-turn and width limitations. (They could have used larger turns, since the controls are so touchy.)

    Yes, I am still playing with my sandbox. I am up to version 4 now and slowly working in ghetto collision detection, which I am throwing into the generic MODULE that I am attempting to create. I seem to get faster processing with the code in an external DLL than I would if I attempted to do it all in TB, with all of the other things going on in the loop code.
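    Since the sandbox code itself isn't posted in the thread, here is a minimal sketch (with hypothetical names) of what a cheap "ghetto" collision check in a C module/DLL might look like: a bounding-sphere overlap test, which avoids per-polygon work entirely.

    ```c
    #include <stdio.h>

    /* Hypothetical types for illustration only --
       the actual sandbox/module code is not shown in the thread. */
    typedef struct { double x, y, z, radius; } Sphere;

    /* Cheap bounding-sphere overlap test: two objects collide when the
       distance between centers is less than the sum of their radii.
       Comparing squared distances avoids the sqrt call entirely. */
    int spheres_collide(const Sphere *a, const Sphere *b)
    {
        double dx = a->x - b->x;
        double dy = a->y - b->y;
        double dz = a->z - b->z;
        double r  = a->radius + b->radius;
        return dx*dx + dy*dy + dz*dz < r*r;
    }

    int main(void)
    {
        Sphere ship = { 0.0, 0.0, 0.0, 1.0 };
        Sphere rail = { 1.5, 0.0, 0.0, 1.0 };  /* center distance 1.5 < 2.0: overlap */
        Sphere wall = { 5.0, 0.0, 0.0, 1.0 };  /* center distance 5.0 > 2.0: clear   */

        printf("%d %d\n", spheres_collide(&ship, &rail), spheres_collide(&ship, &wall));
        return 0;
    }
    ```

    A test this simple is also cheap enough to run every loop iteration without the interpreter-side overhead the post mentions.
    
    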

  2. #2
    thinBasic author ErosOlmi
    Join Date: Sep 2004
    Location: Milan - Italy

    Re: Video of what I was seeing in my head... Since I never played wipe-out.

    I think you found the video that was also in my mind when the project started.
    Really a challenge!

    I think that with a module in which to develop the parts of the game that need a burst of speed, you can get a lot of additional power.

    Eros
    Windows 10 Pro for Workstations 64bit - 32 GB - Intel(R) Xeon(R) W-10855M CPU @ 2.80GHz - NVIDIA Quadro RTX 3000

  3. #3

    Re: Video of what I was seeing in my head... Since I never played wipe-out.

    Well, the thing I was trying to reduce with the module was all the repetitive calls: stuff that could be checked or adjusted without demand on the game, such as collision, physics, AI brains, redraw loops, and object tracking. That leaves only the user input and AI translation to the main program. The AI uses the same controls you do; the brains only decide where it wants to go, while the controls determine how and when it gets there.
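    The "AI uses the same controls you do" split can be sketched as two layers: a brain that only picks an intent, and a translation step that emits the same control inputs a player would. All names here are hypothetical, for illustration only.

    ```c
    #include <stdio.h>

    /* Hypothetical shared control interface: both the human player and
       the AI drive the ship through the same inputs. */
    typedef struct { int thrust; int steer; } Controls;   /* steer: -1 left, 0 straight, +1 right */
    typedef struct { double heading; double target_heading; } Ship;

    /* "Brain": decides only WHERE it wants to go. */
    void ai_brain(Ship *s, double waypoint_heading)
    {
        s->target_heading = waypoint_heading;
    }

    /* Translation layer: turns the brain's intent into the same control
       inputs a player would produce, deciding HOW/WHEN to get there. */
    Controls ai_translate(const Ship *s)
    {
        Controls c = { 1, 0 };                      /* always thrust in this toy sketch */
        double err = s->target_heading - s->heading;
        if (err >  0.1)      c.steer = +1;          /* need to turn right */
        else if (err < -0.1) c.steer = -1;          /* need to turn left  */
        return c;
    }

    int main(void)
    {
        Ship ship = { 0.0, 0.0 };
        ai_brain(&ship, 1.0);               /* brain: aim at heading 1.0 rad */
        Controls c = ai_translate(&ship);   /* controls: thrust and steer right */
        printf("%d %d\n", c.thrust, c.steer);
        return 0;
    }
    ```

    Because the main program only ever sees `Controls`, it cannot tell a player from an AI, which is exactly what keeps the AI's behavior "fair" at the interface level.
    
    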

    I also imagine the time-sensitive network controller would be separate. (A screen delay due to VSYNC, if used, or a drag-move would kill the network connection; TB seems to stop running at those moments, which would result in a loss of network data.)
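    Short of moving networking into a separate process or thread, a common single-threaded mitigation for stalls like that is a fixed-timestep accumulator: after a long frame, the network step runs repeatedly to catch up instead of starving. A minimal sketch with hypothetical names (this is a general technique, not the project's actual code):

    ```c
    #include <stdio.h>

    #define NET_STEP_MS 50      /* fixed network tick interval (assumed value) */

    static int net_ticks = 0;
    static void network_update(void) { net_ticks++; }  /* stand-in for real send/recv */

    /* Advance the network clock by one frame's elapsed time; after a
       stalled frame, the while loop runs the step several times to catch up. */
    static void advance(int frame_ms, int *accum_ms)
    {
        *accum_ms += frame_ms;
        while (*accum_ms >= NET_STEP_MS) {
            network_update();
            *accum_ms -= NET_STEP_MS;
        }
    }

    int main(void)
    {
        int accum = 0;
        advance(16, &accum);    /* normal frame: 16 ms accumulated, no tick yet */
        advance(16, &accum);    /* 32 ms accumulated                            */
        advance(16, &accum);    /* 48 ms accumulated, still under 50            */
        advance(300, &accum);   /* stalled frame (drag-move): 348 ms -> 6 ticks */
        printf("%d\n", net_ticks);
        return 0;
    }
    ```

    This only smooths over short stalls, though; if the interpreter truly stops executing during a drag-move, the post's point stands and a separate thread or process is the real fix.
    
    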

    I still can't see the ground, or the walls. But they can see each other now.

    80 days left... 2 and 2/3 months...

    Has anyone played with the GUI at all?

