Adrian Šošić

Member of the Signal Processing Group at the Institute of Telecommunications, TU Darmstadt.

Merckstraße 25
64283 Darmstadt

Office: S3|06 250

+49 6151 16-21348
+49 6151 16-21342


Office Hours: Mondays, 15:30-18:30

Adrian Šošić received his B.Sc. and M.Sc. degrees in Electrical Engineering and Information Technology from Technische Universität Darmstadt in October 2010 and May 2013, respectively. During his studies, he spent time at University College Cork (UCC), Ireland. In his master's thesis, “Markov Assumptions for Non-negative Matrix Factorization”, he investigated how fundamental concepts from linear dynamical systems and non-negative representations can be combined to learn parts-based models for sequential data. The representations developed in his thesis can be applied, for instance, to sequence classification tasks such as human action recognition from video data.
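As a rough illustration of the non-negative factorization idea underlying the thesis (a minimal sketch of standard NMF with Lee-Seung multiplicative updates, not the thesis method itself, which additionally imposes Markov/temporal structure; all names and parameter values here are illustrative):

    import numpy as np

    def nmf(V, r, n_iter=200, eps=1e-9):
        """Minimal NMF sketch: factorize a non-negative matrix V (m x n) into
        non-negative factors W (m x r, parts/basis) and H (r x n, activations)
        using multiplicative updates for the Frobenius objective."""
        m, n = V.shape
        rng = np.random.default_rng(0)
        W = rng.random((m, r)) + eps
        H = rng.random((r, n)) + eps
        for _ in range(n_iter):
            H *= (W.T @ V) / (W.T @ W @ H + eps)  # update activations
            W *= (V @ H.T) / (W @ H @ H.T + eps)  # update parts
        return W, H

    # Example: the columns of V could hold per-frame feature vectors of a video,
    # so that W learns parts and H their time-varying activations.
    V = np.abs(np.random.default_rng(1).standard_normal((64, 100)))
    W, H = nmf(V, r=5)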

In September 2013, Adrian joined the Signal Processing Group at TU Darmstadt, where he began working toward his Ph.D.

Research

Adrian's research interests center on topics from modern machine learning, statistical signal processing, image processing, decision-making, reinforcement learning, and game theory. He is especially interested in the methodology of Bayesian inference, and his goal is to develop robust inference methods that handle uncertainty in a principled manner.

Currently, he is working on inference methods for (large-scale) multi-agent settings as they appear in many biological systems, e.g., animal swarms.

Adrian collaborates with the Bioinspired Communication Systems Lab under the supervision of Prof. Dr. Heinz Koeppl, and with Prof. Gerhard Neumann from the Lincoln Centre for Autonomous Systems Research.

Current Student Projects

Mengyao Zhang (Master Thesis): Independence Tests for Application in Driver Assistant Functions
Benjamin Graf (Master Thesis): Multi-agent and Continuum Reinforcement Learning
Sanket Shinde (Master Thesis): POMDPs with Discrete and Continuous Observations for Robotics

Completed Student Projects

Maximilian Hüttenrauch (Master Thesis): Guided Deep Reinforcement Learning for Robot Swarms, 08/2016
Frederik Bous & Edin Ragibović (ATISSP Seminar): Localisation with the Particle Filter, 07/2016
Mahmoud El-Hindi (Proseminar): Reinforcement Learning, 05/2016
Romain Gemble (Bachelor Thesis): Investigation and Implementation of Algorithms for Music Source Separation, 02/2016
Sandro Kecanovic (Bachelor Thesis): Bayesian Non-negative Matrix Factorization, 12/2015
Sandro Kecanovic (Project Seminar): Non-negative Matrix Factorization Techniques, 08/2015
Benjamin Graf, Rosa Maria Carpio López & Daniel Scheuermann (ATISSP Seminar): Reinforcement Learning for Black Jack, 07/2015
Zhengqi Qian (Master Thesis): Reinforcement Learning in Swarm Systems, 07/2015
Mateesh Bhave (Master Thesis): Learning Class Uncertainties using Neural Networks, 02/2015
Sandro Kecanovic (Proseminar): Machine Learning: An Overview, 12/2014
Jun Liu & Mengyao Zhang (ATISSP Seminar): Reinforcement Learning, 07/2014

Student (Co-)supervision

Ahmed Abdelrahman
Sachin Kumar
Louisiane Lemaire
Dhrubajyoti Ghosh
Nicolas Wenzel
Jannis Weigend
Markus Schiffhauer

Teaching

WS 13/14
  • Digital Signal Processing Lab
SS 14
  • Digital Signal Processing Lab
  • Advances in Digital Signal Processing: Image and Video Processing
  • Advanced Topics in Statistical Signal Processing
WS 14/15
  • Digital Signal Processing Lab
SS 15
  • Digital Signal Processing Lab
  • Advances in Digital Signal Processing: Image and Video Processing
  • Advanced Topics in Statistical Signal Processing
SS 16
  • Advances in Digital Signal Processing: Image and Video Processing
  • Advanced Topics in Statistical Signal Processing

Publications

2017

Šošić, A. ; Zoubir, A. M. ; Koeppl, H. :
A Bayesian Approach to Policy Recognition and State Representation Learning.
In: IEEE Transactions on Pattern Analysis and Machine Intelligence
[Article], (2017)

Šošić, A. ; Zoubir, A. M. ; Koeppl, H. :
A Continuum Model for Homogeneous Systems of Interacting Agents.
In: Swarm Intelligence (under review)
[Article], (2017)

Hüttenrauch, M. ; Šošić, A. ; Neumann, G. :
Guided Deep Reinforcement Learning for Swarm Systems.
In: AAMAS Workshop on Autonomous Robots and Multirobot Systems.
[Conference or workshop item], (2017)

Šošić, A. ; KhudaBukhsh, W. R. ; Zoubir, A. M. ; Koeppl, H. :
Inverse Reinforcement Learning in Swarm Systems.
In: AAMAS Workshop on Transfer in Reinforcement Learning.
[Conference or workshop item], (2017)

2016

Šošić, A. ; Zoubir, A. M. ; Koeppl, H. :
Policy Recognition via Expectation Maximization.
In: IEEE International Conference on Acoustics, Speech and Signal Processing.
[Conference or workshop item], (2016)

Hüttenrauch, M. ; Šošić, A. ; Neumann, G. :
Guided Deep Reinforcement Learning for Swarm Systems.
In: NIPS Workshop on Learning, Inference and Control of Multi-Agent Systems.
[Conference or workshop item], (2016)

2014

Guthier, T. ; Šošić, A. ; Willert, V. ; Eggert, J. :
sNN-LDS: Spatio-temporal Non-negative Sparse Coding for Human Action Recognition.
[Online-Edition: http://dx.doi.org/10.1007/978-3-319-11179-7_24]
In: Artificial Neural Networks and Machine Learning – ICANN 2014. Lecture Notes in Computer Science, 8681. Springer International Publishing , pp. 185-192.
[Book section], (2014)

2012

Guthier, T. ; Šošić, A. ; Willert, V. ; Eggert, J. :
Finding a Tradeoff between Compression and Loss in Motion Compensated Video Coding.
In: SIGMAP and WINSYS 2012 - Proceedings of the International Conference on Signal Processing and Multimedia Applications and International Conference on Wireless Information Networks and Systems, Rome, Italy, 24-27 July 2012.
[Conference or workshop item], (2012)
