|
Is there someone who knows a good tutorial for a 2D game in C++?
I searched for many tutorials but didn't find a good one.
|
|
|
|
|
In any case, game development in C++ will be tough, and if you are a beginner, don't even bother.
A good one, if you mean one that builds everything for you, won't be easy to find. You need to learn a lot of things and then merge them. JavaScript, C# etc. have engines that support 2D game development easily and with simpler code compared to C++. So, look into them.
2D breakout game using pure JavaScript - Game development | MDN[^]
Unity - 2D Game Development Walkthrough[^]
The sh*t I complain about
It's like there ain't a cloud in the sky and it's raining out - Eminem
~! Firewall !~
|
|
|
|
|
I learned the basics of C++ and Java, but I think you can make better games in C++. I just need a tutorial on how to make some games with C++.
|
|
|
|
|
If you are a beginner then the choice between Java and C++ doesn't make much difference. You can develop performance-ready games in Java too. For that, I would recommend getting a good understanding of many things; there are books available, and a tutorial or article won't be enough.
The sh*t I complain about
It's like there ain't a cloud in the sky and it's raining out - Eminem
~! Firewall !~
|
|
|
|
|
We have articles on this topic: 2d game[^].
|
|
|
|
|
<pre lang="C++ MFC">I have a question for making use of multiple cores within a single thread. The scenario is like this, I need to do an intensive analysis behind the scene, so I create a separate thread to do this. As I want this thread to use multiple cores if they are available, OpenMP is used within this thread. Does this thread actually use multiple cores available? I guess it does, then my next question is how to kill this thread safely? i.e. anything needs to take care of compare with single core case? Why I am asking this question is that the killing process crashes the application when nultiple cores are used</pre>
|
|
|
|
|
TQTL wrote: the killing process crashes the application when multiple cores are used Sorry, but it is impossible to view your code from out here.
|
|
|
|
|
The rule is: a maximum of one core per thread.
You have to design your program for multi-threading, then launch as many threads as you want; they will spread across the cores until you reach the number of cores, after which some threads will share the same core.
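A minimal sketch of that idea (a standalone console program, not the original poster's code; the worker function and names are just placeholders): query the number of hardware threads with C++11 and launch exactly that many workers, so each one can end up on its own core.
<pre lang="C++">
#include <iostream>
#include <thread>
#include <vector>

void Worker(unsigned id)
{
    // Placeholder for real work; output from different threads may interleave.
    std::cout << "Worker " << id << " running\n";
}

int main()
{
    // Number of hardware threads (cores * hyper-threads); may return 0 if unknown.
    unsigned n = std::thread::hardware_concurrency();
    if (n == 0) n = 2;   // fallback assumption

    std::vector<std::thread> pool;
    for (unsigned i = 0; i < n; ++i)
        pool.emplace_back(Worker, i);

    for (auto& t : pool)
        t.join();        // wait for every worker to finish
    return 0;
}
</pre>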
Patrice
“Everything should be made as simple as possible, but no simpler.” Albert Einstein
|
|
|
|
|
Hi,
I countered the unwarranted downvote. The question is not low quality.
Have a look at Parallel Programming in Visual C++[^] and make sure you read everything.
Some thoughts:
1.) Yes, these parallelization libraries take advantage of multiple cores.
2.) Some of the libraries may be using Fibers[^] under the hood.
3.) Each library has a different technique for breaking out of the parallelized algorithm. For example... the concurrency namespace[^] utilizes a cancellation_token[^] while the OpenMP library uses the #pragma omp cancel[^]
Your Crash:
If you are experiencing a crash while using the Microsoft implementation of OpenMP... it's probably because the OpenMP cancel directive is not supported. Microsoft only supports a partial subset of the directives[^]. As you can see... cancel is not in the list of supported directives.
If you are *not* using the Microsoft implementation of OpenMP, then you probably need to set the OMP_CANCELLATION environment variable, as the cancel directive is typically disabled for performance reasons.
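For the Microsoft (OpenMP 2.0) case, a common workaround, sketched below under the assumption of a simple parallel loop (the function and flag names are invented for illustration), is a shared flag that turns the remaining iterations into cheap no-ops, so the worker thread finishes on its own instead of being killed:
<pre lang="C++">
#include <atomic>

// Shared flag that the owner of the worker thread sets when it wants the
// analysis to stop early.
std::atomic<bool> g_stopRequested(false);

void AnalyzeAll(int count)
{
    #pragma omp parallel for
    for (int i = 0; i < count; ++i)
    {
        // "break" is not allowed in an OpenMP for loop, so once a stop has
        // been requested each remaining iteration just becomes a cheap no-op.
        if (g_stopRequested.load(std::memory_order_relaxed))
            continue;

        // ... expensive per-item analysis goes here ...
    }
    // All OpenMP workers have joined by the time we get here, so the owning
    // thread can simply return instead of being terminated.
}

void RequestStop()   // call this from the GUI thread instead of killing
{
    g_stopRequested = true;
}

int main()
{
    AnalyzeAll(1000000);   // in the real program this runs on the worker thread
    return 0;
}
</pre>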
Best Wishes,
-David Delaune
P.S.
If you want to convert your code from OpenMP to the Microsoft concurrency runtime[^] then here is an example.
How to: Convert an OpenMP Loop that Uses Cancellation to Use the Concurrency Runtime[^]
|
|
|
|
|
TQTL wrote: ...how to kill this thread safely? With few exceptions, this is generally a bad idea. A thread can internally kill itself, but that is a different question.
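A minimal sketch of that idea (the names are illustrative, not from the original code): the worker watches a stop flag and returns on its own, and the owner joins it rather than killing it.
<pre lang="C++">
#include <atomic>
#include <chrono>
#include <thread>

std::atomic<bool> g_quit(false);
std::thread g_worker;

void WorkerLoop()
{
    while (!g_quit.load())
    {
        // do one chunk of the background analysis ...
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
    }
    // Returning from this function ends the thread cleanly.
}

void StartWorker() { g_worker = std::thread(WorkerLoop); }

void StopWorker()
{
    g_quit = true;            // ask the thread to finish
    if (g_worker.joinable())
        g_worker.join();      // wait for it; no TerminateThread involved
}

int main()
{
    StartWorker();
    std::this_thread::sleep_for(std::chrono::milliseconds(100));
    StopWorker();
    return 0;
}
</pre>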
"One man's wage rise is another man's price increase." - Harold Wilson
"Fireproof doesn't mean the fire will never come. It means when the fire comes that you will be able to withstand it." - Michael Simmons
"You can easily judge the character of a man by how he treats those who can do nothing for him." - James D. Miles
|
|
|
|
|
Just ask OpenMP how many processors it is using:
<pre lang="C++">int omp_get_num_procs();</pre>
If you want the answer from Windows:
<pre lang="C++">SYSTEM_INFO sysinfo;
GetSystemInfo(&sysinfo);
int countCPU = sysinfo.dwNumberOfProcessors;</pre>
If you are expecting it to make MFC itself quicker, it won't by very much at all, unless it's just particular things like graphics or file access that are slowing you down.
To explain why, we need to look at MFC, which is an event-driven system: it fetches messages and dispatches messages, and that is its basic behaviour. Part of that behaviour is that it needs to process the events in the order they sit in the message queue.
For example, clicking the X close button at the top of an application fires off three messages:
WM_CLOSE
WM_DESTROY
WM_QUIT
Those messages must be processed in that order, and the next message cannot start being processed until the one before it has completed. So the required ordering of the event queue trumps any multiprocessor threading you want to put on the message queue. Even WM_PAINT messages are ordered: you must paint the windows in that order. It's pretty safe to say that most things in the message queue, except perhaps things like WM_MOUSEMOVE, have a required order to them, and, multithreaded or not, you can't process the next message until the previous one has completed. So you can't really speed up the MFC message framework.
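To make that concrete, here is a minimal plain Win32 sketch (MFC wraps the same pump inside CWinApp::Run; this is not the poster's code) showing that messages are pulled and dispatched strictly one at a time, and how WM_CLOSE leads to WM_DESTROY and then WM_QUIT:
<pre lang="C++">
#include <windows.h>

// The window procedure: handlers run one at a time, in the order the
// messages were pulled from the queue by the loop in WinMain below.
LRESULT CALLBACK WndProc(HWND hWnd, UINT msg, WPARAM wp, LPARAM lp)
{
    switch (msg)
    {
    case WM_CLOSE:              // the user clicked the X button
        DestroyWindow(hWnd);    // generates WM_DESTROY
        return 0;
    case WM_DESTROY:
        PostQuitMessage(0);     // puts WM_QUIT into the queue
        return 0;
    }
    return DefWindowProcW(hWnd, msg, wp, lp);
}

int WINAPI WinMain(HINSTANCE hInst, HINSTANCE, LPSTR, int nShow)
{
    WNDCLASSW wc = {};
    wc.lpfnWndProc   = WndProc;
    wc.hInstance     = hInst;
    wc.lpszClassName = L"PumpDemo";
    RegisterClassW(&wc);

    HWND hWnd = CreateWindowExW(0, L"PumpDemo", L"Message pump demo",
                                WS_OVERLAPPEDWINDOW, CW_USEDEFAULT, CW_USEDEFAULT,
                                400, 300, NULL, NULL, hInst, NULL);
    ShowWindow(hWnd, nShow);

    // The pump: strictly one message at a time, in queue order.
    MSG msg;
    while (GetMessageW(&msg, NULL, 0, 0) > 0)
    {
        TranslateMessage(&msg);
        DispatchMessageW(&msg);   // returns only after the handler completes
    }
    return (int)msg.wParam;       // exit code posted by PostQuitMessage
}
</pre>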
What you can speed up is the processing of a single message. So in response to a WM_XXXXXXX message you may spawn threads to get the task that has to be executed for that message done in a faster time. If what you are doing is safe for Windows, it is safe for MFC, because your threads will all end at exactly the same point as a single-threaded program would: you either send back an LRESULT, Send/Post a message, or perform some action, and MFC remains unaware of the threads.
So basically, if you multithread within the handling of a single message, or within an action or process that is called from one, you can do it safely from MFC's point of view; in fact MFC will be oblivious to the threading.
If you are having problems and crashing, you are either not keeping within those guidelines and creating ordering problems for MFC, or alternatively what you are doing is simply broken on Windows and the problem has nothing to do with MFC.
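A sketch of that pattern, assuming a hypothetical window class CMyWnd (in a real app this would be your existing dialog or view); the message ID WM_ANALYSIS_DONE, the handler names, and the fake "analysis" are all invented for illustration. The handler spawns a worker, returns immediately, and the worker reports back by posting a message, so MFC never sees the thread.
<pre lang="C++">
#include <afxwin.h>

#define WM_ANALYSIS_DONE (WM_APP + 1)   // user-defined notification message

class CMyWnd : public CFrameWnd
{
public:
    void OnStartAnalysis();                         // e.g. called from a button handler
    afx_msg LRESULT OnAnalysisDone(WPARAM, LPARAM); // ON_MESSAGE handler
    DECLARE_MESSAGE_MAP()
};

struct WorkItem
{
    HWND hNotify;   // window to report back to
    int  nInput;    // whatever the analysis needs
};

// AfxBeginThread worker: does the heavy work, then posts the result back.
UINT AnalysisThread(LPVOID pParam)
{
    WorkItem* pItem = static_cast<WorkItem*>(pParam);
    int nResult = pItem->nInput * 2;                 // stand-in for the real work
    ::PostMessage(pItem->hNotify, WM_ANALYSIS_DONE, 0, (LPARAM)nResult);
    delete pItem;
    return 0;
}

BEGIN_MESSAGE_MAP(CMyWnd, CFrameWnd)
    ON_MESSAGE(WM_ANALYSIS_DONE, &CMyWnd::OnAnalysisDone)
END_MESSAGE_MAP()

void CMyWnd::OnStartAnalysis()
{
    WorkItem* pItem = new WorkItem{ m_hWnd, 42 };
    AfxBeginThread(AnalysisThread, pItem);   // returns at once; the UI stays responsive
}

LRESULT CMyWnd::OnAnalysisDone(WPARAM, LPARAM lResult)
{
    // Back on the UI thread, processed in normal message-queue order.
    TRACE(_T("Analysis result: %d\n"), (int)lResult);
    return 0;
}
</pre>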
You can split and join threads using just C++11; make a Windows console application and paste this code in to see that, plus some mutex usage for control. Your OpenMP etc. should give better performance, but this makes it easy to prototype something.
<pre lang="C++">
#include <iostream>
#include <thread>
#include <mutex>
#include <cstdio>   // getchar

static const int num_threads = 10;
std::mutex mtx;

// Output may interleave because std::cout is not synchronised here.
void call_from_thread(int tid) {
    std::cout << "Launched by thread " << tid << std::endl;
}

// Same output, but serialised by the mutex.
void call_from_thread_with_mutex(int tid) {
    mtx.lock();
    std::cout << "Launched by thread " << tid << std::endl;
    mtx.unlock();
}

int main() {
    std::thread t[num_threads];

    // Launch a group of threads without any locking.
    for (int i = 0; i < num_threads; ++i) {
        t[i] = std::thread(call_from_thread, i);
    }
    std::cout << "Non mutex threads launched from the main\n";

    // Join them (wait for all of them to finish).
    for (int i = 0; i < num_threads; ++i) {
        t[i].join();
    }

    // Launch again; this time the workers take the mutex before printing.
    for (int i = 0; i < num_threads; ++i) {
        t[i] = std::thread(call_from_thread_with_mutex, i);
    }
    mtx.lock();
    std::cout << "Mutex threads launched from the main\n";
    mtx.unlock();

    for (int i = 0; i < num_threads; ++i) {
        t[i].join();
    }

    getchar();
    return 0;
}
</pre>
In vino veritas
modified 23-Nov-16 2:01am.
|
|
|
|
|
Hello All,
Please help me out.
I have enabled arrow-key functionality in an MFC application. Now I need that, once the checkbox is clicked, the arrow keys are no longer enabled.
I need the condition to be in the .rc file, not in the .cpp file.
I tried "#if(false) ... #endif", but how do I provide a condition inside the #if(), and how will it work at runtime when I change the condition?
modified 22-Nov-16 5:56am.
|
|
|
|
|
|
<pre lang="C++">struct CLIENT_DATA
{
    int m_nOnlineTime;
    int m_nCredits;
};</pre>
for a TCP server which would handle at most 10240 clients.
I want to manage all the clients. There are two solutions:
1: use a std::map, and a mutex over every map operation;
2: use a C array, and assign an ID (0-10239) to each client when it connects; when it disconnects, the ID is recycled.
If I use an array, a mutex would only be necessary for the ID operations, such as assigning an ID to a client on connect and recycling the ID on disconnect.
In fact, they are not that different if I only have to handle connect and disconnect, but I also want to change the CLIENT_DATA for each client:
for the first solution, I have to lock the whole map;
for the second, I could maintain a mutex in each CLIENT_DATA.
My question is: solution 1 is much easier to code, while solution 2 seems much more efficient.
Which one do you prefer?
Thank you.
|
|
|
|
|
Quote: use a std::map, and a mutex over every map operation That is the only straightforward way to do it, but it has some constraints. In my view a mutex is only necessary if your application allows asynchronous processing plus write access for the connected clients. If the writing is managed by your server program, then you know how to handle it; hint: create the condition inside the function and don't make it visible to the threads, i.e. abstract the implementation.
Quote: use a C array, and assign an ID (0-10239) to each client when it connects; when it disconnects, the ID is recycled I would avoid this. Even if only 10 clients are connected, the C array will still allocate all of the slots in RAM. That is not worth it. And, as you said, the ID would be recycled: how? What is your algorithm for that?
There are many details missing from your post, such as what data the clients will access, whether they will also write data, and whether to the same file or not. A mutex and thread safety only matter if something happens at the same time on different threads; if not, then don't worry about this.
The sh*t I complain about
It's like there ain't a cloud in the sky and it's raining out - Eminem
~! Firewall !~
|
|
|
|
|
OK, let me explain it in detail.
1. I have to maintain a collection of all the clients' data, such as:
<pre lang="C++">struct CLIENT_DATA
{
    std::string m_strIp;
    int m_nOnlineTime;
    int m_nCredits;
};</pre>
2. The client data updates (by a timer, for example) and the socket run in different threads.
So if I use a map:
<pre lang="C++">bool Connect(std::string strIp)
{
    boost::lock_guard<boost::mutex> lock(m_lock);
    CLIENT_DATA* pData = new CLIENT_DATA;
    pData->m_nOnlineTime = 0;
    pData->m_strIp = strIp;
    m_mapClients[strIp] = pData;
    return true;
}

void Disconnect(std::string strIp)
{
    boost::lock_guard<boost::mutex> lock(m_lock);
    std::map<std::string,CLIENT_DATA*>::iterator iter = m_mapClients.find(strIp);
    if(iter == m_mapClients.end())
        return;
    CLIENT_DATA* pClientData = iter->second;
    if(pClientData != NULL)
    {
        delete pClientData;
        pClientData = NULL;
    }
    m_mapClients.erase(iter);
}

bool UpdateOnlineTime(std::string strIp)
{
    boost::lock_guard<boost::mutex> lock(m_lock);
    std::map<std::string,CLIENT_DATA*>::iterator iter = m_mapClients.find(strIp);
    if(iter == m_mapClients.end())
        return false;
    CLIENT_DATA* pClientData = iter->second;
    if(pClientData == NULL)
        return false;
    pClientData->m_nOnlineTime += 10;
    return true;
}</pre>
I have to lock the map with the same mutex in Connect, Disconnect, and UpdateOnlineTime.
If I use an array:
<pre lang="C++">void PrepareIds()
{
    for(int i = 0; i < 10240; i++)
        m_vecAvailableIds.push_back(i);
}

int FetchId()
{
    boost::lock_guard<boost::mutex> lock(m_lock);
    if(m_vecAvailableIds.empty())
        return -1;
    int nId = m_vecAvailableIds.back();
    m_vecAvailableIds.pop_back();
    return nId;
}

void ReturnId(int nId)
{
    boost::lock_guard<boost::mutex> lock(m_lock);
    m_vecAvailableIds.push_back(nId);
}

bool Connected(std::string strIp)
{
    int nNewId = FetchId();
    if(nNewId == -1)
        return false;
    CLIENT_DATA* pData = m_arrClients[nNewId];
    boost::lock_guard<boost::mutex> lock(pData->m_mutex_data);
    pData->m_bInUse = true;
    pData->m_nOnlineTime = 0;
    pData->m_strIp = strIp;
    return true;
}

bool Disconnected(int nClientId)
{
    Assert(nClientId >= 0 && nClientId < 10240);
    CLIENT_DATA* pData = m_arrClients[nClientId];
    boost::lock_guard<boost::mutex> lock(pData->m_mutex_data);
    pData->m_bInUse = false;
    ReturnId(nClientId);
    return true;
}</pre>
Now the UpdateOnlineTime function and the CLIENT_DATA structure change:
<pre lang="C++">struct CLIENT_DATA
{
    std::string m_strIp;
    int m_nOnlineTime;
    int m_nCredits;
    bool m_bInUse;
    boost::mutex m_mutex_data;
};

bool UpdateOnlineTime(int nClientId)
{
    Assert(nClientId >= 0 && nClientId < 10240);
    CLIENT_DATA* pData = m_arrClients[nClientId];
    boost::lock_guard<boost::mutex> lock(pData->m_mutex_data);
    if(!pData->m_bInUse)
        return false;
    pData->m_nOnlineTime += 10;
    return true;
}</pre>
There is only a need to lock one CLIENT_DATA, in case we update m_nCredits somewhere else.
Now there are two kinds of mutex: one for the IDs, and one for each CLIENT_DATA.
The most important point is when I iterate over all the client data.
If I use a map:
<pre lang="C++">void UpdateClients()
{
    boost::lock_guard<boost::mutex> lock(m_lock);   // whole map locked
    std::map<std::string,CLIENT_DATA*>::iterator iter;
    for(iter = m_mapClients.begin(); iter != m_mapClients.end(); ++iter)
    {
        Update(iter->second);
    }
}</pre>
If I use an array:
<pre lang="C++">void UpdateClients()
{
    for(int i = 0; i < 10240; i++)   // no global lock needed
    {
        Update(m_arrClients[i]);
    }
}</pre>
If the map size is very large, such as 10240,
locking the whole map would block the connect and disconnect operations.
modified 21-Nov-16 5:35am.
|
|
|
|
|
Some thoughts about your posts (the initial one and the above reply to Afzaal Ahmad Zeeshan).
Do you really need to store the IP? If so, don't store it as a string.
There is no need to generate a (re-usable) connect ID. You already have one:
the connection socket handle.
Using the handle also avoids storing the IP, because you can get the IP from the handle using getpeername. But using the handle makes a map more appropriate.
There is also a solution to avoid locking at all:
Just let the functions be executed in the same thread. As far as I understand, there are three events: connection, disconnection, and periodic updates. If connection and disconnection are events issued from other threads, implement signalling functions that pass the socket handle.
My suggestion:
Use a map storing the socket handle and avoid locking by putting all associated functions into a single thread.
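A minimal sketch of that suggestion (Winsock; the function names are invented, and the stub main is only there so it compiles and runs): the SOCKET handle itself is the map key, the IP is fetched on demand with getpeername, and everything is called from one thread so no locking is needed.
<pre lang="C++">
#include <winsock2.h>
#include <ws2tcpip.h>
#include <map>
#include <string>
#pragma comment(lib, "ws2_32.lib")

struct CLIENT_DATA
{
    int m_nOnlineTime = 0;
    int m_nCredits = 0;
};

std::map<SOCKET, CLIENT_DATA> g_clients;   // touched by one thread only

void OnConnected(SOCKET s)    { g_clients[s] = CLIENT_DATA(); }
void OnDisconnected(SOCKET s) { g_clients.erase(s); }

// Fetch the peer IP from the handle instead of storing it per client.
std::string PeerIp(SOCKET s)
{
    sockaddr_in addr = {};
    int len = sizeof(addr);
    if (getpeername(s, reinterpret_cast<sockaddr*>(&addr), &len) != 0)
        return std::string();
    char buf[INET_ADDRSTRLEN] = {};
    inet_ntop(AF_INET, &addr.sin_addr, buf, sizeof(buf));
    return buf;
}

void UpdateAll()   // periodic update, same thread as connect/disconnect
{
    for (auto& entry : g_clients)
        entry.second.m_nOnlineTime += 10;
}

int main()
{
    WSADATA wsa;
    WSAStartup(MAKEWORD(2, 2), &wsa);
    // ... the accept loop would call OnConnected/OnDisconnected/UpdateAll here ...
    WSACleanup();
    return 0;
}
</pre>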
|
|
|
|
|
Thank you.
In fact there are two threads.
Thread A gets some CLIENT_DATA pointer to do something with, frequently, but does not change the map.
Thread B is the update thread, which also handles connect and disconnect; the map is only changed here.
So if I update the map in thread B, it should be locked while updating,
and that would block thread A somewhere like GetClientData(int nClientId).
Solution 2 is designed to avoid locking the whole CLIENT_DATA collection in thread B.
My question is:
if the map is not very large, using a std::map is fine, but if the collection of CLIENT_DATA is very large,
such as 10240, is std::map OK?
|
|
|
|
|
So you have these conditions:
connect and disconnect
Seldom; single add / remove. Locking is no problem.
update
Periodic; all items must be accessed. Locking is a problem when updating takes a significant time.
query
Single accesses? At periodic intervals? Short or long intervals?
Whether locking is a problem depends on the query frequency and the number of items.
So it seems that only updating is critical. But as I understand it, you will update all items at once and you know the maximum number of items. Regardless of the chosen storage type, the first optimisation is reserving the required memory to avoid re-allocations, and avoiding members that use dynamic memory allocation like std::string.
So your requirements are:
- Inserting and deleting performance is not critical
- Iterating over all items is critical
- Accessing single items is not critical?
When using a map, use an unordered_map because it satisfies the above (fast iteration over all elements but slow iteration over a subset of the elements).
When using an array, list, or vector, you have to perform a find to access single elements (which is also there, but hidden, when using a map).
When using a pre-allocated C array you also have to implement functions to add and remove elements, where adding is simple (append) but removing requires moving memory. But this will probably be the fastest option when iterating over all items.
However, I don't think the performance of a plain C array is much better than using a container iterator (note that you must use the iterator rather than the at or [] operator, because those result in a lookup with maps and include out-of-range checks with other container types).
You may implement different versions and benchmark them to see the differences. But a maximum of 10240 items should not be critical.
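A minimal sketch of that advice (the key type and field names are just examples, not your code): reserve the maximum size up front, avoid std::string members, and walk the container with its own iterators rather than per-key lookups.
<pre lang="C++">
#include <unordered_map>
#include <cstdint>

struct CLIENT_DATA
{
    char m_szIp[16];      // fixed buffer instead of std::string ("x.x.x.x")
    int  m_nOnlineTime;
    int  m_nCredits;
};

int main()
{
    const std::size_t kMaxClients = 10240;

    std::unordered_map<std::uintptr_t, CLIENT_DATA> clients;  // key = socket handle
    clients.reserve(kMaxClients);   // pre-allocate buckets, no rehashing later

    // ... connect/disconnect would insert and erase by handle here ...

    // Periodic update: walk every element via iterators, not clients[key].
    for (auto it = clients.begin(); it != clients.end(); ++it)
        it->second.m_nOnlineTime += 10;

    return 0;
}
</pre>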
|
|
|
|
|
thank you very much
I would try and test it.
|
|
|
|
|
Hi
I created an event with the following parameters: CreateEvent(NULL, FALSE, FALSE, NULL);
From what I understand, the third parameter gives the initial state of the object,
signaled or non-signaled. If it is FALSE (non-signaled), then the first time I execute
WaitForSingleObject my thread will return and, in addition, set the object to signaled,
meaning all threads will have to wait until I call SetEvent.
The thing is, the first time around the executing thread never returns.
|
|
|
|
|
I think you've got the meaning of "signalled" the wrong way around. The wait function will return only when the event is signalled. An auto-reset event will automatically reset the event to non-signalled when the wait returns; a manual-reset event will not. Use SetEvent to release other threads, or ResetEvent to block other threads.
Think of the event of some operation that has to happen in your code. Other threads need to wait for that operation to complete. Once it has been completed, you signal the other threads that it is ok to proceed.
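A minimal, self-contained sketch of that sequence (a standalone console program, not your code): an auto-reset event created non-signalled, a worker that blocks in WaitForSingleObject, and a SetEvent that releases it.
<pre lang="C++">
#include <windows.h>
#include <stdio.h>

HANDLE g_hEvent = NULL;

DWORD WINAPI Worker(LPVOID)
{
    printf("Worker: waiting for the event...\n");
    // Blocks until the event becomes signalled; because it is auto-reset,
    // it flips back to non-signalled as soon as this wait is satisfied.
    WaitForSingleObject(g_hEvent, INFINITE);
    printf("Worker: event was signalled, proceeding.\n");
    return 0;
}

int main()
{
    // bManualReset = FALSE (auto-reset), bInitialState = FALSE (non-signalled)
    g_hEvent = CreateEvent(NULL, FALSE, FALSE, NULL);

    HANDLE hThread = CreateThread(NULL, 0, Worker, NULL, 0, NULL);

    Sleep(1000);              // simulate the operation the worker waits for
    SetEvent(g_hEvent);       // release exactly one waiting thread

    WaitForSingleObject(hThread, INFINITE);
    CloseHandle(hThread);
    CloseHandle(g_hEvent);
    return 0;
}
</pre>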
Cheers,
Mick
------------------------------------------------
It doesn't matter how often or hard you fall on your arse, eventually you'll roll over and land on your feet.
|
|
|
|
|
Okay, how about the second parameter to CreateEvent: if that is FALSE, does that mean that after
a signaled event returns right away from WaitForSingleObject, Windows will automatically turn the event to non-signaled?
|
|
|
|
|
Yes, it does. bManualReset = FALSE means the event is automatically reset to non-signalled after a wait operation is satisfied.
Cheers,
Mick
------------------------------------------------
It doesn't matter how often or hard you fall on your arse, eventually you'll roll over and land on your feet.
|
|
|
|
|
Hello experts!
There is a component for Visual Studio C++ called "assimp" (the Open Asset Import Library).
With the help of "assimp", a scene with a 3D object is built on the screen.
Can I, with the help of "assimp", export the scene image with the 3D object into a graphics file, for example BMP or JPEG?
|
|
|
|