Article on Deep Q-Networks published [22.11.23]
The authors Pilar von Pilchau, Pätzel, Stein, and Hähner publish at the 2023 International Joint Conference on Neural Networks.
There is news from the research work at the AI department: another publication has appeared, this time on the topic of Deep Q-Networks.
Here are the hard facts about the publication.
Title:
Deep Q-Network Updates for the Full Action-Space Utilizing Synthetic Experiences
Authors:
Wenzel Pilar von Pilchau, David Pätzel, Anthony Stein, Jörg Hähner
Venue:
2023 International Joint Conference on Neural Networks
Abstract:
Deep Q-Networks are built in such a way that, given a state, they predict the Q-values for the entire action-space. However, given an experience, the training update only incorporates the loss value for a single action, namely the one that was actually executed. This is because the rewards and follow-up states (required for computing the loss via the temporal-difference error) associated with the other actions are unknown. With these missing values at hand, or at least estimates of them, an update over the entire action-space would be possible. We present the Full-Update-DQN, which is able to do just that. Sub-losses are weighted to compensate for uncertainty and noise, and we show in four different experiments in sparse-reward settings that our approach solves these problems more consistently and even faster than the original approach.
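To make the contrast in the abstract concrete, here is a minimal sketch in PyTorch of a standard single-action DQN loss next to a full action-space loss. The functions `synthesize` and `weight` are hypothetical placeholders for the paper's experience-synthesis and sub-loss-weighting mechanisms, and the overall structure is an assumption based on the abstract, not the authors' actual implementation:

```python
import torch
import torch.nn.functional as F

GAMMA = 0.99  # discount factor

def single_action_loss(q_net, target_net, s, a, r, s_next, done):
    """Standard DQN: the TD loss covers only the executed action a."""
    q_pred = q_net(s)[a]                                 # Q(s, a)
    with torch.no_grad():
        bootstrap = target_net(s_next).max()             # max_a' Q_target(s', a')
        td_target = r + GAMMA * bootstrap * (1.0 - done)
    return F.mse_loss(q_pred, td_target)

def full_action_space_loss(q_net, target_net, s, a, r, s_next, done,
                           synthesize, weight):
    """Sketch of a full action-space update: for every action that was NOT
    executed, a synthetic (reward, follow-up state, done) estimate stands in
    for the missing real experience, and its sub-loss is down-weighted to
    compensate for the uncertainty of that estimate.

    `synthesize(s, action) -> (r_hat, s_next_hat, done_hat)` and
    `weight(action) -> float in [0, 1]` are hypothetical stand-ins."""
    q_pred = q_net(s)                                    # Q(s, .) for all actions
    total = 0.0
    for action in range(q_pred.shape[0]):
        if action == a:                                  # the real experience
            r_k, s_k, d_k, w_k = r, s_next, done, 1.0
        else:                                            # a synthetic experience
            r_k, s_k, d_k = synthesize(s, action)
            w_k = weight(action)
        with torch.no_grad():
            td_target = r_k + GAMMA * target_net(s_k).max() * (1.0 - d_k)
        total = total + w_k * F.mse_loss(q_pred[action], td_target)
    return total
```

The key difference is that the second loss backpropagates through all entries of Q(s, .) in a single update rather than through just one, which is what the abstract refers to as an update over the entire action-space.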
The following link takes you to the complete overview of the research papers published so far by the department: https://ki-agrartechnik.uni-hohenheim.de/veroeffentlichungen