Heroes of the Storm

Research Project using HotS, preliminary (promising) results!


Hello everyone,

I got some preliminary results regarding my project (more info and the first post at www.playingwithnetworks.it) and they look promising!

I would like to thank the people who already took part in the experiment, without you I would not be able to share anything today.

I still need additional data to properly validate and/or test my hypothesis, so if you have not already participated in the study I would really appreciate your help. It takes 10 minutes to complete the survey:
https://www.playingwithnetworks.it/limesurvey/index.php/574585?lang=en

The aim of the project is to train neural networks on real gameplay behaviour, in the form of game replays, to recognize certain psychological characteristics.

The deadline for the paper is the 15th of April; on that date I will try to share additional information with you.

Now, about the results…


Warning: Semi-long read

Some terminology:

Accuracy: how well the model fits the training data (the fraction of training replays classified correctly).

Loss: the error of the model's predictions on the training data (lower is better).

Validation Accuracy: how well the model fits the test data (i.e. data never seen before by the network).

Validation Loss: the error of the model's predictions on the test data.
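The post does not say which loss function was used; assuming categorical cross-entropy, the standard choice for multi-class classification, the "loss" numbers below can be read against a simple baseline. This is a sketch for intuition only:

```python
import numpy as np

def cross_entropy(y_true, y_pred):
    """Mean categorical cross-entropy: lower means the predicted
    class probabilities match the true labels more closely."""
    return -np.mean(np.log(y_pred[np.arange(len(y_true)), y_true]))

# A model that guesses uniformly over k classes scores ln(k):
# ln(3) ~ 1.10 for the 3 score categories, ln(4) ~ 1.39 for 4 players.
# Losses well below that baseline indicate real learning.
y_true = np.array([0, 1, 2, 1])
uniform = np.full((4, 3), 1 / 3)
print(cross_entropy(y_true, uniform))  # ~1.0986 (= ln 3)
```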


TL;DR

Although more data are needed to properly test the hypothesis, the models started to fit, with mixed results. Specifically, the validation accuracies are: HH: 79%; EMO: 68%; XTRA: 75%; CON: 87%; OTE: 67%.

Additionally, some models over-fitted on the training data (e.g. openness to experience (OTE): accuracy: 88.28%; validation accuracy: 48.61%), meaning they may fit very well once additional training and validation data are available.

Notably, the model was able to tell 4 different players apart from their replays alone with an accuracy of 99.63% (i.e. it correctly recognized 797 of the 800 validation replays).


First experiment: 4 players

I initially tested whether it would be possible to train a network to recognize which of 4 different players is playing, using only replays.

For this part of the experiment I asked my friends for replays and used 1000 replays per player.

A total of 4000 replays were used to train the network with a validation split of 0.2 (3200 replays were used to train the network, 800 to validate it). After 200 or so epochs the networks started to fit very accurately.

The best result was a validation accuracy of 99.63%; in other words, the network correctly recognized 797 out of the 800 validation replays.
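The split and accuracy arithmetic above works out as follows (a quick sanity check, not part of the training code):

```python
total_replays = 4000
validation_split = 0.2

n_val = int(total_replays * validation_split)  # 800 replays held out
n_train = total_replays - n_val                # 3200 used for training

correct = 797
val_accuracy = correct / n_val                 # 0.99625, reported as 99.63%
print(n_train, n_val, val_accuracy)
```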

This part of the project was a success, and I proceeded to the next experiment. As a side note, it took over 48 hours to train the network.

Also, as a future direction, it would be interesting to test whether the network could still recognize, say, 20 players, or whether some player typologies would emerge.


Second experiment: personality characteristics

I proceeded to test whether it would be possible to train a network to recognize some of a player's personality characteristics from the way they play, in the form of a replay.

I first converted the survey scores into 3 categories (0, low score, below −1 standard deviation; 1, medium score, between −1 and +1 standard deviations; 2, high score, above +1 standard deviation) and then started to train the various models.
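The 3-category conversion described above can be sketched as follows (a minimal sketch; the actual scoring pipeline is not shown in the post, and `to_categories` is a hypothetical helper):

```python
import numpy as np

def to_categories(scores):
    # Hypothetical helper: standardize the survey scores, then bin
    # them by distance from the mean in standard deviations:
    # 0 = low (z < -1), 1 = medium (-1 <= z <= +1), 2 = high (z > +1).
    z = (scores - scores.mean()) / scores.std()
    return np.where(z < -1, 0, np.where(z > 1, 2, 1))

scores = np.array([0.0, 10.0, 10.0, 10.0, 10.0, 20.0])
print(to_categories(scores))  # [0 1 1 1 1 2]
```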


352 replays were used for this part of the experiment, and the categories were not homogeneous across all the subscales (i.e. I don't have 100 replays for each category in every scale).

In a perfect world I would have 3000 replays homogeneously distributed, i.e. 1000 for each category.

So again, if you did not take part in the study, I would really appreciate your help 🙂
https://www.playingwithnetworks.it/limesurvey/index.php/574585?lang=en


Honest-Humility

The results for this network are the following:

Best accuracy: 78.52%, loss: .53

Best validation accuracy: 79.19%, loss .55

57 of the 72 validation replays were correctly recognized.


Emotionality

The results for this network are the following:

Best accuracy: 84.51%, loss: .47

Best validation accuracy: 68.06%, loss .82

49 of the 72 validation replays were correctly recognized.


eXtraversion

The results for this network are the following:

Best accuracy: 86.62%, loss: .33

Best validation accuracy: 75.00%, loss .53

54 of the 72 validation replays were correctly recognized.


Agreeableness (versus Anger)

Unfortunately, I was not able to test this subscale: I only have 8 replays for the high-score category (i.e. 2), making it impossible to train a network.


Conscientiousness

The results for this network are the following:

Best accuracy: 87.32%, loss: .43

Best validation accuracy: 87.50%, loss .46

63 of the 72 validation replays were correctly recognized.

Although this model has the best validation accuracy as of today, the loss is still quite high (.46).


Openness to Experience

The results for this network are the following:

Best accuracy: 88.28%, loss: .32

Best validation accuracy: 67.25%, loss 1.06

Although this model has the best training accuracy as of today, the loss is still quite high (.32). It also has the best accuracy on the training data but the worst validation accuracy. Additional data are needed to improve the network.


(preliminary) Conclusion

Although not all the networks work very well yet, the first results look very promising. The accuracy of some of the models is quite good; the biggest problem is generalizing to data never seen by the networks (i.e. increasing validation accuracy and decreasing over-fitting).

The best accuracy and the best validation accuracy are often not at the same epoch, meaning that the network over-fits over time.

Over-fitting happens when the model becomes good at identifying the training data but does not work very well on data it has never seen. The best way to reduce over-fitting is additional data: training the networks on a wider set of cases should let them generalize better to replays never seen before.
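The epoch mismatch described above can be made concrete with a small helper that scans a Keras-style training history (a sketch; the history values below are made up for illustration):

```python
def best_epoch(history, metric="val_accuracy"):
    # Return the (epoch, value) pair where the chosen metric peaks.
    values = history[metric]
    epoch = max(range(len(values)), key=lambda e: values[e])
    return epoch, values[epoch]

# Illustrative history: training accuracy keeps climbing while
# validation accuracy peaks early and then degrades -- over-fitting.
history = {
    "accuracy":     [0.60, 0.72, 0.81, 0.88, 0.93],
    "val_accuracy": [0.58, 0.67, 0.70, 0.66, 0.62],
}
print(best_epoch(history, "accuracy"))      # (4, 0.93)
print(best_epoch(history, "val_accuracy"))  # (2, 0.7)
```

In practice, early stopping on validation accuracy (keeping the weights from the best epoch) is a common way to cope with this until more data are available.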

So, although the first results look promising, additional data is needed to fully test and accept/reject the hypothesis.

Thank you for your attention; if you have questions I will be more than happy to answer them!

Cheers,

Giulio
