<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<rss xmlns:atom="http://www.w3.org/2005/Atom" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" version="2.0">
  <channel>
    <title>Informatisches Kolloquium</title>
    <link>https://lecture2go.uni-hamburg.de/l2go/-/get/l/5013</link>
    <description><![CDATA[ ]]></description>
    <language>en-US</language>
    <copyright>University of Hamburg 2025</copyright>
    <itunes:author>University of Hamburg</itunes:author>
    <itunes:summary><![CDATA[ ]]></itunes:summary>
    <itunes:image href="https://lecture2go.uni-hamburg.de/images/00.000_video-71151_2025-03-24_14-15.jpg?lastmodified=1743517438657"/>
    <pubDate>Tue, 01 Apr 2025 16:24:01 +0200</pubDate>
    <lastBuildDate>Tue, 01 Apr 2025 16:24:01 +0200</lastBuildDate>
    <image>
      <title>Informatisches Kolloquium</title>
      <link>https://lecture2go.uni-hamburg.de/l2go/-/get/l/5013</link>
      <url>https://lecture2go.uni-hamburg.de/images/00.000_video-71151_2025-03-24_14-15.jpg?lastmodified=1743517438657</url>
    </image>
    <atom:link href="https://lecture2go.uni-hamburg.de/rss/5013.mp4.xml" rel="self" title="Informatisches Kolloquium (MP4 Feed)" type="application/rss+xml"/>
    <item>
      <title>Development of Compositionality through Interactive Learning of Language and Action of Robots Using Free Energy Principle</title>
      <description><![CDATA[The focus of my research has been to investigate how cognitive agents can develop structural representations and functions through iterative interaction with the world, exercising agency and learning from the resulting perceptual experience. For this purpose, my team has developed various models analogous to predictive-coding and active-inference frameworks based on the free energy principle. These models have been used to conduct diverse robotics experiments, including goal-directed planning and replanning in a dynamic environment, social embodied interactions, and the development of higher cognitive competencies such as meta-cognition. The current talk highlights a set of emergent phenomena that we observed in our recent robotics study focused on embodied language [1]. These findings may inform us how children can develop compositional linguistic competency through only a limited amount of sensory-motor-language associative learning.

Reference:
[1] P. Vijayaraghavan, J. Queißer, S. Flores, and J. Tani (2025). Development of compositionality through interactive learning of language and action of robots. Science Robotics, 10, eadp075.]]></description>
      <itunes:summary><![CDATA[The focus of my research has been to investigate how cognitive agents can develop structural representations and functions through iterative interaction with the world, exercising agency and learning from the resulting perceptual experience. For this purpose, my team has developed various models analogous to predictive-coding and active-inference frameworks based on the free energy principle. These models have been used to conduct diverse robotics experiments, including goal-directed planning and replanning in a dynamic environment, social embodied interactions, and the development of higher cognitive competencies such as meta-cognition. The current talk highlights a set of emergent phenomena that we observed in our recent robotics study focused on embodied language [1]. These findings may inform us how children can develop compositional linguistic competency through only a limited amount of sensory-motor-language associative learning.

Reference:
[1] P. Vijayaraghavan, J. Queißer, S. Flores, and J. Tani (2025). Development of compositionality through interactive learning of language and action of robots. Science Robotics, 10, eadp075.]]></itunes:summary>
      <itunes:duration>01:11:03</itunes:duration>
      <itunes:image href="https://lecture2go.uni-hamburg.de/images/00.000_video-71151_2025-03-24_14-15.jpg?lastmodified=1743517438657"/>
      <link>https://lecture2go.uni-hamburg.de/l2go/-/get/v/71151</link>
      <pubDate>Mon, 24 Mar 2025 14:15:00 +0100</pubDate>
      <guid>https://lecture2go.uni-hamburg.de/l2go/-/get/v/71151</guid>
    </item>
    <item>
      <title>Why do I still need to grade all those exams?</title>
      <description><![CDATA[Giving feedback on free-text answers (in the form of grades or helpful hints) is a core educational task. Despite a large body of NLP research on the topic, assisting teachers with this task remains challenging. In this talk, we outline the linguistic and external factors influencing the performance level that NLP methods may reach for a given question. However, even in settings where automatic performance rivals that of humans, various practical requirements, often overlooked in research, hinder adoption in the classroom and beyond.]]></description>
      <itunes:summary><![CDATA[Giving feedback on free-text answers (in the form of grades or helpful hints) is a core educational task. Despite a large body of NLP research on the topic, assisting teachers with this task remains challenging. In this talk, we outline the linguistic and external factors influencing the performance level that NLP methods may reach for a given question. However, even in settings where automatic performance rivals that of humans, various practical requirements, often overlooked in research, hinder adoption in the classroom and beyond.]]></itunes:summary>
      <itunes:duration>00:56:34</itunes:duration>
      <itunes:image href="https://lecture2go.uni-hamburg.de/images/00.000_video-70397_2024-11-04_17-15.jpg?lastmodified=1731927524152"/>
      <link>https://lecture2go.uni-hamburg.de/l2go/-/get/v/70397</link>
      <pubDate>Mon, 04 Nov 2024 17:15:00 +0100</pubDate>
      <guid>https://lecture2go.uni-hamburg.de/l2go/-/get/v/70397</guid>
    </item>
    <item>
      <title>Scalable and Fast Cloud Data Management</title>
      <description><![CDATA[Database research at the University of Hamburg is centered around scalable technologies for cloud data management and connects the dots between traditional database systems, web caching, and continuous data analytics. In this presentation, we provide a rundown of our research topics throughout the years and explain how we turned them into practice at the Software-as-a-Service company Baqend.

We first present an overview of the system space that we are concerned with and the high-level goals we pursue in our work. We then go into detail on how the Orestes architecture combines web caching with traditional data management techniques to accelerate primary-key access in globally distributed setups. Next, we cover the InvaliDB architecture, which employs continuous stream processing to extend the Orestes approach to complex database queries. Finally, we explain how the cloud service Speed Kit (https://speed-kit.com) turns our research into practice by accelerating more than 100 million users per month. We close with ongoing and future work, including the Beaconnect project, which revolves around continuous analytics over real-user tracking data with Apache Flink.]]></description>
      <itunes:summary><![CDATA[Database research at the University of Hamburg is centered around scalable technologies for cloud data management and connects the dots between traditional database systems, web caching, and continuous data analytics. In this presentation, we provide a rundown of our research topics throughout the years and explain how we turned them into practice at the Software-as-a-Service company Baqend.

We first present an overview of the system space that we are concerned with and the high-level goals we pursue in our work. We then go into detail on how the Orestes architecture combines web caching with traditional data management techniques to accelerate primary-key access in globally distributed setups. Next, we cover the InvaliDB architecture, which employs continuous stream processing to extend the Orestes approach to complex database queries. Finally, we explain how the cloud service Speed Kit (https://speed-kit.com) turns our research into practice by accelerating more than 100 million users per month. We close with ongoing and future work, including the Beaconnect project, which revolves around continuous analytics over real-user tracking data with Apache Flink.]]></itunes:summary>
      <itunes:duration>01:03:21</itunes:duration>
      <itunes:image href="https://lecture2go.uni-hamburg.de/images/00.000_video-60355_2022-01-24_17-15.jpg?lastmodified=1663761098788"/>
      <link>https://lecture2go.uni-hamburg.de/l2go/-/get/v/60355</link>
      <pubDate>Mon, 24 Jan 2022 17:15:00 +0100</pubDate>
      <guid>https://lecture2go.uni-hamburg.de/l2go/-/get/v/60355</guid>
    </item>
    <item>
      <title>Informatikkolloquium WS20/21 - Matthias Rarey</title>
      <description/>
      <itunes:summary/>
      <itunes:duration>00:48:07</itunes:duration>
      <itunes:image href="https://lecture2go.uni-hamburg.de/images/00.000_video-48260_2021-01-25_17-15.jpg?lastmodified=1663761073380"/>
      <link>https://lecture2go.uni-hamburg.de/l2go/-/get/v/48260</link>
      <pubDate>Mon, 25 Jan 2021 17:15:00 +0100</pubDate>
      <guid>https://lecture2go.uni-hamburg.de/l2go/-/get/v/48260</guid>
    </item>
    <item>
      <title>Informatikkolloquium - 11.01.2021 Gregor Kasieczka</title>
      <description/>
      <itunes:summary/>
      <itunes:duration>00:43:11</itunes:duration>
      <itunes:image href="https://lecture2go.uni-hamburg.de/images/00.000_video-45855_2021-01-11_17-15.jpg?lastmodified=1663761069722"/>
      <link>https://lecture2go.uni-hamburg.de/l2go/-/get/v/45855</link>
      <pubDate>Mon, 11 Jan 2021 17:15:00 +0100</pubDate>
      <guid>https://lecture2go.uni-hamburg.de/l2go/-/get/v/45855</guid>
    </item>
    <item>
      <title>From Quantum Dynamics to Quantum Networks</title>
      <description/>
      <itunes:summary/>
      <itunes:duration>00:45:14</itunes:duration>
      <itunes:image href="https://lecture2go.uni-hamburg.de/images/00.000_video-35393_2020-06-15_17-15.jpg?lastmodified=1663761047034"/>
      <link>https://lecture2go.uni-hamburg.de/l2go/-/get/v/35393</link>
      <pubDate>Mon, 15 Jun 2020 17:15:00 +0200</pubDate>
      <guid>https://lecture2go.uni-hamburg.de/l2go/-/get/v/35393</guid>
    </item>
    <item>
      <title>Ubiquitous Health: Wearable Computing Systems that Increase Quality of Life and Transform Health Care</title>
      <description/>
      <itunes:summary/>
      <itunes:duration>01:15:36</itunes:duration>
      <itunes:image href="https://lecture2go.uni-hamburg.de/images/00.000_video-25907_2020-02-17_17-00.jpg?lastmodified=1663761030488"/>
      <link>https://lecture2go.uni-hamburg.de/l2go/-/get/v/25907</link>
      <pubDate>Mon, 17 Feb 2020 17:00:00 +0100</pubDate>
      <guid>https://lecture2go.uni-hamburg.de/l2go/-/get/v/25907</guid>
    </item>
    <item>
      <title>Privacy and/or Trust?</title>
      <description/>
      <itunes:summary/>
      <itunes:duration>01:08:29</itunes:duration>
      <itunes:image href="https://lecture2go.uni-hamburg.de/images/00.000_video-25695_2020-01-13_16-30.jpg?lastmodified=1663761028726"/>
      <link>https://lecture2go.uni-hamburg.de/l2go/-/get/v/25695</link>
      <pubDate>Mon, 13 Jan 2020 16:30:00 +0100</pubDate>
      <guid>https://lecture2go.uni-hamburg.de/l2go/-/get/v/25695</guid>
    </item>
    <item>
      <title>Künstliche Intelligenz zwischen Science und Fiction</title>
      <description/>
      <itunes:summary/>
      <itunes:duration>01:17:05</itunes:duration>
      <itunes:image href="https://lecture2go.uni-hamburg.de/images/00.000_video-25552_2019-12-09_17-15.jpg?lastmodified=1663761027972"/>
      <link>https://lecture2go.uni-hamburg.de/l2go/-/get/v/25552</link>
      <pubDate>Mon, 09 Dec 2019 17:15:00 +0100</pubDate>
      <guid>https://lecture2go.uni-hamburg.de/l2go/-/get/v/25552</guid>
    </item>
    <item>
      <title>Technological development and application of Deep Learning in Biomedicine</title>
      <description/>
      <itunes:summary/>
      <itunes:duration>00:41:44</itunes:duration>
      <itunes:image href="https://lecture2go.uni-hamburg.de/images/00.000_video-25551_2019-12-02_17-00.jpg?lastmodified=1663761027943"/>
      <link>https://lecture2go.uni-hamburg.de/l2go/-/get/v/25551</link>
      <pubDate>Mon, 02 Dec 2019 17:00:00 +0100</pubDate>
      <guid>https://lecture2go.uni-hamburg.de/l2go/-/get/v/25551</guid>
    </item>
    <item>
      <title>Blind multi-microphone noise reduction and dereverberation algorithms for speech communication applications</title>
      <description/>
      <itunes:summary/>
      <itunes:duration>01:01:36</itunes:duration>
      <itunes:image href="https://lecture2go.uni-hamburg.de/images/00.000_video-25347_2019-11-18_17-00.jpg?lastmodified=1663761025818"/>
      <link>https://lecture2go.uni-hamburg.de/l2go/-/get/v/25347</link>
      <pubDate>Mon, 18 Nov 2019 17:00:00 +0100</pubDate>
      <guid>https://lecture2go.uni-hamburg.de/l2go/-/get/v/25347</guid>
    </item>
    <item>
      <title>Deep Machines That Know When They Do not Know</title>
      <description><![CDATA[
Our minds make inferences that appear to go far beyond standard machine learning. Whereas people can learn richer representations and use them for a wider range of learning tasks, machine learning algorithms have mainly been employed in a stand-alone context, constructing a single function from a table of training examples. In this talk, I shall touch upon a view on machine learning, called probabilistic programming, that can help capture these human learning aspects by combining high-level programming languages and probabilistic machine learning — the high-level language helps reduce the cost of modelling, and probabilities help quantify when a machine does not know something. Since probabilistic inference remains intractable, existing approaches leverage deep learning for inference. Instead of “going down the full neural road,” I shall argue for sum-product networks, a deep but tractable architecture for probability distributions. This can speed up inference in probabilistic programs, as I shall illustrate for unsupervised science understanding, and even pave the way towards automating density estimation, making machine learning accessible to a broader audience of non-experts.

This talk is based on joint work with many people, including Carsten Binnig, Zoubin Ghahramani, Andreas Koch, Alejandro Molina, Sriraam Natarajan, Robert Peharz, Constantin Rothkopf, Thomas Schneider, Patrick Schramowski, Xiaoting Shao, Karl Stelzner, Martin Trapp, Isabel Valera, Antonio Vergari, and Fabrizio Ventola.]]></description>
      <itunes:summary><![CDATA[
Our minds make inferences that appear to go far beyond standard machine learning. Whereas people can learn richer representations and use them for a wider range of learning tasks, machine learning algorithms have mainly been employed in a stand-alone context, constructing a single function from a table of training examples. In this talk, I shall touch upon a view on machine learning, called probabilistic programming, that can help capture these human learning aspects by combining high-level programming languages and probabilistic machine learning — the high-level language helps reduce the cost of modelling, and probabilities help quantify when a machine does not know something. Since probabilistic inference remains intractable, existing approaches leverage deep learning for inference. Instead of “going down the full neural road,” I shall argue for sum-product networks, a deep but tractable architecture for probability distributions. This can speed up inference in probabilistic programs, as I shall illustrate for unsupervised science understanding, and even pave the way towards automating density estimation, making machine learning accessible to a broader audience of non-experts.

This talk is based on joint work with many people, including Carsten Binnig, Zoubin Ghahramani, Andreas Koch, Alejandro Molina, Sriraam Natarajan, Robert Peharz, Constantin Rothkopf, Thomas Schneider, Patrick Schramowski, Xiaoting Shao, Karl Stelzner, Martin Trapp, Isabel Valera, Antonio Vergari, and Fabrizio Ventola.]]></itunes:summary>
      <itunes:duration>00:54:22</itunes:duration>
      <itunes:image href="https://lecture2go.uni-hamburg.de/images/00.000_video-24641_2019-05-20_17-15.jpg?lastmodified=1663761018211"/>
      <link>https://lecture2go.uni-hamburg.de/l2go/-/get/v/24641</link>
      <enclosure length="1674056411" type="video/mp4" url="https://l2gdownload.rrz.uni-hamburg.de/abo/00.000_video-24641_2019-05-20_17-15.mp4"/>
      <pubDate>Mon, 20 May 2019 17:15:00 +0200</pubDate>
      <guid>https://lecture2go.uni-hamburg.de/l2go/-/get/v/24641</guid>
    </item>
    <item>
      <title>Opportunistic Networks - Challenges and Opportunities</title>
      <description><![CDATA[
Opportunistic networks are composed of end-user devices, connected directly to each other through localized wireless communication technologies. They enable direct communication between those devices without the need for infrastructure. This is their main challenge and their main advantage at the same time. On the one hand, they enable communication where infrastructure is not available or has been damaged. On the other hand, they can incur high delays and cannot guarantee delivery. In this talk, we discuss the main properties and challenges of opportunistic networks, where they come from, and current research trends.]]></description>
      <itunes:summary><![CDATA[
Opportunistic networks are composed of end-user devices, connected directly to each other through localized wireless communication technologies. They enable direct communication between those devices without the need for infrastructure. This is their main challenge and their main advantage at the same time. On the one hand, they enable communication where infrastructure is not available or has been damaged. On the other hand, they can incur high delays and cannot guarantee delivery. In this talk, we discuss the main properties and challenges of opportunistic networks, where they come from, and current research trends.]]></itunes:summary>
      <itunes:duration>00:46:18</itunes:duration>
      <itunes:image href="https://lecture2go.uni-hamburg.de/images/00.000_video-24564_2019-05-06_17-15.jpg?lastmodified=1663761017538"/>
      <link>https://lecture2go.uni-hamburg.de/l2go/-/get/v/24564</link>
      <enclosure length="1433725327" type="video/mp4" url="https://l2gdownload.rrz.uni-hamburg.de/abo/00.000_video-24564_2019-05-06_17-15.mp4"/>
      <pubDate>Mon, 06 May 2019 17:15:00 +0200</pubDate>
      <guid>https://lecture2go.uni-hamburg.de/l2go/-/get/v/24564</guid>
    </item>
    <item>
      <title>Known Operator Learning - A new Paradigm for Machine Learning in Signal Processing &amp; Physics</title>
      <description><![CDATA[
Please note that the correct license for this recording is CC-BY-DE 3.0 and that the slides shown during the lecture are available as a PDF file under the download tab.

We describe a new approach for incorporating prior knowledge into any machine learning algorithm. We aim at applications in physics and signal processing in which we know that certain operations must be embedded into the algorithm. Any operation that allows computation of a gradient or sub-gradient towards its inputs is suited for our mathematical framework. We prove that the inclusion of such prior knowledge reduces maximal error bounds and the number of free parameters. We apply this approach to various tasks ranging from CT image reconstruction through vessel segmentation to the derivation of previously unknown imaging algorithms. As such, the concept is widely applicable for many researchers in physics, imaging, and signal processing. We expect that our analysis will support further investigation of this idea in many other fields of physics, imaging, and signal processing.]]></description>
      <itunes:summary><![CDATA[
Please note that the correct license for this recording is CC-BY-DE 3.0 and that the slides shown during the lecture are available as a PDF file under the download tab.

We describe a new approach for incorporating prior knowledge into any machine learning algorithm. We aim at applications in physics and signal processing in which we know that certain operations must be embedded into the algorithm. Any operation that allows computation of a gradient or sub-gradient towards its inputs is suited for our mathematical framework. We prove that the inclusion of such prior knowledge reduces maximal error bounds and the number of free parameters. We apply this approach to various tasks ranging from CT image reconstruction through vessel segmentation to the derivation of previously unknown imaging algorithms. As such, the concept is widely applicable for many researchers in physics, imaging, and signal processing. We expect that our analysis will support further investigation of this idea in many other fields of physics, imaging, and signal processing.]]></itunes:summary>
      <itunes:duration>00:46:29</itunes:duration>
      <itunes:image href="https://lecture2go.uni-hamburg.de/images/00.000_video-24475_2019-04-15_17-15.jpg?lastmodified=1663761017149"/>
      <link>https://lecture2go.uni-hamburg.de/l2go/-/get/v/24475</link>
      <enclosure length="1427561712" type="video/mp4" url="https://l2gdownload.rrz.uni-hamburg.de/abo/00.000_video-24475_2019-04-15_17-15.mp4"/>
      <pubDate>Mon, 15 Apr 2019 17:15:00 +0200</pubDate>
      <guid>https://lecture2go.uni-hamburg.de/l2go/-/get/v/24475</guid>
    </item>
    <item>
      <title>Empirical Studies into Modelling in Software Development in the Age of Big Data and AI</title>
      <description><![CDATA[
The slides shown during the lecture are available as a PDF file under the download tab.

Modeling is a common part of many modern-day engineering practices. However, little evidence exists in Software Engineering about how models are made, how models are used, and how they help in producing better software. In this talk, I will present highlights from my last 15+ years of research in the area of software modelling, model-driven development, and UML.

Topics that will be addressed:

Introduction:
- How are models used in software development? Including how the use and purposes of models evolve over time.
- Dispelling 'fake news' about UML (discussing arguments in favour of and against software modelling).

Do UML models actually help in creating better software?
- To this end, I will give an overview of some of the ongoing work which is based on a dataset of almost 100,000 UML models (which can be found at http://oss.models-db.com/). This dataset opens up many possibilities for 'big data science' approaches to researching software design.
- I will present several steps in the construction and analysis of this dataset, focusing on steps where we used machine learning. These steps include: the automated extraction, recognition, and classification of UML diagrams and Software Architecture Design Documents from GitHub; the analysis of graph patterns (motifs) in reverse-engineered software designs; and the use of machine learning in abstracting reverse-engineered diagrams into 'forward designed' design diagrams.

Finally, I will present a prototype of a software design environment for smartboards which enables user interaction via touch and voice.]]></description>
      <itunes:summary><![CDATA[
The slides shown during the lecture are available as a PDF file under the download tab.

Modeling is a common part of many modern-day engineering practices. However, little evidence exists in Software Engineering about how models are made, how models are used, and how they help in producing better software. In this talk, I will present highlights from my last 15+ years of research in the area of software modelling, model-driven development, and UML.

Topics that will be addressed:

Introduction:
- How are models used in software development? Including how the use and purposes of models evolve over time.
- Dispelling 'fake news' about UML (discussing arguments in favour of and against software modelling).

Do UML models actually help in creating better software?
- To this end, I will give an overview of some of the ongoing work which is based on a dataset of almost 100,000 UML models (which can be found at http://oss.models-db.com/). This dataset opens up many possibilities for 'big data science' approaches to researching software design.
- I will present several steps in the construction and analysis of this dataset, focusing on steps where we used machine learning. These steps include: the automated extraction, recognition, and classification of UML diagrams and Software Architecture Design Documents from GitHub; the analysis of graph patterns (motifs) in reverse-engineered software designs; and the use of machine learning in abstracting reverse-engineered diagrams into 'forward designed' design diagrams.

Finally, I will present a prototype of a software design environment for smartboards which enables user interaction via touch and voice.]]></itunes:summary>
      <itunes:duration>01:01:17</itunes:duration>
      <itunes:image href="https://lecture2go.uni-hamburg.de/images/00.000_video-24388_2019-04-01_17-15.jpg?lastmodified=1663761016370"/>
      <link>https://lecture2go.uni-hamburg.de/l2go/-/get/v/24388</link>
      <enclosure length="1898812719" type="video/mp4" url="https://l2gdownload.rrz.uni-hamburg.de/abo/00.000_video-24388_2019-04-01_17-15.mp4"/>
      <pubDate>Mon, 01 Apr 2019 17:15:00 +0200</pubDate>
      <guid>https://lecture2go.uni-hamburg.de/l2go/-/get/v/24388</guid>
    </item>
    <item>
      <title>The innovations, industry problems, and research challenges open source has given us</title>
      <description><![CDATA[
Please note that the slides shown during the lecture are available as a PDF download under the download tab. Also please note that the correct license for this recording is CC-BY-DE 3.0.

Open source has given the world legal, engineering, and business strategy innovations. Going new and better ways, however, also creates potential problems for industry, which, when framed positively, turn into research challenges. In this talk, I will lay out the breadth of innovation that open source has given us as well as the resulting industry problems and challenges. The audience can steer where to focus.]]></description>
      <itunes:summary><![CDATA[
Please note that the slides shown during the lecture are available as a PDF download under the download tab. Also please note that the correct license for this recording is CC-BY-DE 3.0.

Open source has given the world legal, engineering, and business strategy innovations. Going new and better ways, however, also creates potential problems for industry, which, when framed positively, turn into research challenges. In this talk, I will lay out the breadth of innovation that open source has given us as well as the resulting industry problems and challenges. The audience can steer where to focus.]]></itunes:summary>
      <itunes:duration>01:02:12</itunes:duration>
      <itunes:image href="https://lecture2go.uni-hamburg.de/images/00.000_video-24253_2019-01-28_17-15.jpg?lastmodified=1663761015452"/>
      <link>https://lecture2go.uni-hamburg.de/l2go/-/get/v/24253</link>
      <enclosure length="1694029136" type="video/mp4" url="https://l2gdownload.rrz.uni-hamburg.de/abo/00.000_video-24253_2019-01-28_17-15.mp4"/>
      <pubDate>Mon, 28 Jan 2019 17:15:00 +0100</pubDate>
      <guid>https://lecture2go.uni-hamburg.de/l2go/-/get/v/24253</guid>
    </item>
    <item>
      <title>Exploring Server-side Blocking of Regions</title>
      <description><![CDATA[
Please note that the correct license for this recording is CC0 and that the PDF slides shown during the lecture are available under the download tab.

arXiv link of the publication: https://arxiv.org/abs/1805.11606

Abstract:

One of the Internet's greatest strengths is the degree to which it facilitates access to any of its resources from users anywhere in the world. The Internet has already become a crucial part of our lives. People around the world use the Internet to communicate, connect, and do business. Yet various commercial, technical, and national interests constrain universal access to information on the Internet.

I will discuss three reasons for the closed web that are not caused by government censorship: blocking visitors from the EU to avoid GDPR compliance, blocking based upon the visitor's country, and blocking due to security concerns. These decisions can have an adverse effect on the people of the blocked regions, especially in the developing world. With many key services, such as education, commerce, and news, offered by a small number of web-based Western companies who might not view the developing world as worth the risk, such indiscriminate blanket blocking could slow the growth of blocked developing regions.

As we build the future web, we need to discuss the implications of such blocking practices and build technologies that ensure an open web for users around the world.]]></description>
      <itunes:summary><![CDATA[
Please note that the correct license for this recording is CC0 and that the PDF slides shown during the lecture are available under the download tab.

arXiv link of the publication: https://arxiv.org/abs/1805.11606

Abstract:

One of the Internet's greatest strengths is the degree to which it facilitates access to any of its resources from users anywhere in the world. The Internet has already become a crucial part of our lives. People around the world use the Internet to communicate, connect, and do business. Yet various commercial, technical, and national interests constrain universal access to information on the Internet.

I will discuss three reasons for the closed web that are not caused by government censorship: blocking visitors from the EU to avoid GDPR compliance, blocking based upon the visitor's country, and blocking due to security concerns. These decisions can have an adverse effect on the people of the blocked regions, especially in the developing world. With many key services, such as education, commerce, and news, offered by a small number of web-based Western companies who might not view the developing world as worth the risk, such indiscriminate blanket blocking could slow the growth of blocked developing regions.

As we build the future web, we need to discuss the implications of such blocking practices and build technologies that ensure an open web for users around the world.]]></itunes:summary>
      <itunes:duration>00:35:37</itunes:duration>
      <itunes:image href="https://lecture2go.uni-hamburg.de/images/00.000_video-23833_2018-12-10_17-15.jpg?lastmodified=1663761014427"/>
      <link>https://lecture2go.uni-hamburg.de/l2go/-/get/v/23833</link>
      <enclosure length="970037428" type="video/mp4" url="https://l2gdownload.rrz.uni-hamburg.de/abo/00.000_video-23833_2018-12-10_17-15.mp4"/>
      <pubDate>Mon, 10 Dec 2018 17:15:00 +0100</pubDate>
      <guid>https://lecture2go.uni-hamburg.de/l2go/-/get/v/23833</guid>
    </item>
    <item>
      <title>Microarchitectural Attacks on modern CPUs</title>
      <description><![CDATA[
From cloud servers to IoT devices, modern CPUs provide a complex microarchitecture to ensure high performance while easing parallelization. Unrelated services often run in parallel on the same platform and share resources. At the logic level, sandboxing ensures isolation between services. However, isolation is not perfect, and side channels caused by the CPU's shared microarchitecture can result in unintended information leakage across processes and virtual machines. For instance, cache attacks that exploit access time variations when retrieving data from the cache or the memory are a powerful tool to extract information from a co-located process.

This talk provides an overview of how microarchitectural features of modern CPUs such as shared caches and speculative execution can be abused to circumvent isolation techniques. It will be shown how the resulting attacks can be applied to extract sensitive information from privileged processes and even across processor boundaries. Modern attack techniques such as cache attacks as well as the infamous Spectre and Meltdown attacks will be presented and discussed.]]></description>
      <itunes:summary><![CDATA[
From cloud servers to IoT devices, modern CPUs provide a complex microarchitecture to ensure high performance while easing parallelization. Unrelated services often run in parallel on the same platform and share resources. At the logic level, sandboxing ensures isolation between services. However, isolation is not perfect, and side channels caused by the CPU's shared microarchitecture can result in unintended information leakage across processes and virtual machines. For instance, cache attacks that exploit access time variations when retrieving data from the cache or the memory are a powerful tool to extract information from a co-located process.

This talk provides an overview of how microarchitectural features of modern CPUs such as shared caches and speculative execution can be abused to circumvent isolation techniques. It will be shown how the resulting attacks can be applied to extract sensitive information from privileged processes and even across processor boundaries. Modern attack techniques such as cache attacks as well as the infamous Spectre and Meltdown attacks will be presented and discussed.]]></itunes:summary>
      <itunes:duration>00:53:21</itunes:duration>
      <itunes:image href="https://lecture2go.uni-hamburg.de/images/00.000_video-23659_2018-11-26_17-15.jpg?lastmodified=1663761013035"/>
      <link>https://lecture2go.uni-hamburg.de/l2go/-/get/v/23659</link>
      <pubDate>Mon, 26 Nov 2018 17:15:00 +0100</pubDate>
      <guid>https://lecture2go.uni-hamburg.de/l2go/-/get/v/23659</guid>
    </item>
    <item>
      <title>Language dynamics in social media</title>
      <description><![CDATA[
Please note that the slides shown in the lecture are available as a PDF file under the download tab.

In this talk I shall outline a summary of our five-year-long initiative studying the temporal dynamics of various human language-like entities on social media. Some of the topics that I plan to cover are (a) how opinion conflicts could be effectively used for incivility detection on Twitter [CSCW 2018], (b) how word borrowings can be automatically identified from social signals [EMNLP 2017], and (c) how hashtags on Twitter form compounds like natural language words (e.g., #Wikipedia+#Blackout=#WikipediaBlackout) that become far more popular than the individual constituent hashtags [CSCW 2016, Honorable Mention].]]></description>
      <itunes:summary><![CDATA[
Please note that the slides shown in the lecture are available as a PDF file under the download tab.

In this talk I shall outline a summary of our five-year-long initiative studying the temporal dynamics of various human language-like entities on social media. Some of the topics that I plan to cover are (a) how opinion conflicts could be effectively used for incivility detection on Twitter [CSCW 2018], (b) how word borrowings can be automatically identified from social signals [EMNLP 2017], and (c) how hashtags on Twitter form compounds like natural language words (e.g., #Wikipedia+#Blackout=#WikipediaBlackout) that become far more popular than the individual constituent hashtags [CSCW 2016, Honorable Mention].]]></itunes:summary>
      <itunes:duration>00:55:54</itunes:duration>
      <itunes:image href="https://lecture2go.uni-hamburg.de/images/00.000_video-23399_2018-10-22_17-15.jpg?lastmodified=1663761010430"/>
      <link>https://lecture2go.uni-hamburg.de/l2go/-/get/v/23399</link>
      <enclosure length="1511410543" type="video/mp4" url="https://l2gdownload.rrz.uni-hamburg.de/abo/00.000_video-23399_2018-10-22_17-15.mp4"/>
      <pubDate>Mon, 22 Oct 2018 17:15:00 +0200</pubDate>
      <guid>https://lecture2go.uni-hamburg.de/l2go/-/get/v/23399</guid>
    </item>
    <item>
      <title>Imitation learning, zero-shot learning and automated fact checking</title>
      <description><![CDATA[
The slides are available as a PDF under the download tab.

In this talk I will give an overview of my research in machine learning for natural language processing. I will begin by introducing my work on imitation learning, a machine learning paradigm I have used to develop novel algorithms for structure prediction that have been applied successfully to a number of tasks such as semantic parsing, natural language generation, and information extraction. Key advantages are the ability to handle large output search spaces and to learn with non-decomposable loss functions. Following this, I will discuss my work on zero-shot learning using neural networks, which enabled us to learn models that can predict labels for which no data was observed during training. I will conclude with my work on automated fact-checking, a challenge we proposed in order to stimulate progress in machine learning, natural language processing and, more broadly, artificial intelligence.]]></description>
      <itunes:summary><![CDATA[
The slides are available as a PDF under the download tab.

In this talk I will give an overview of my research in machine learning for natural language processing. I will begin by introducing my work on imitation learning, a machine learning paradigm I have used to develop novel algorithms for structure prediction that have been applied successfully to a number of tasks such as semantic parsing, natural language generation, and information extraction. Key advantages are the ability to handle large output search spaces and to learn with non-decomposable loss functions. Following this, I will discuss my work on zero-shot learning using neural networks, which enabled us to learn models that can predict labels for which no data was observed during training. I will conclude with my work on automated fact-checking, a challenge we proposed in order to stimulate progress in machine learning, natural language processing and, more broadly, artificial intelligence.]]></itunes:summary>
      <itunes:duration>01:15:29</itunes:duration>
      <itunes:image href="https://lecture2go.uni-hamburg.de/images/00.000_video-23243_2018-07-09_17-15.jpg?lastmodified=1663761008475"/>
      <link>https://lecture2go.uni-hamburg.de/l2go/-/get/v/23243</link>
      <enclosure length="961786250" type="video/mp4" url="https://l2gdownload.rrz.uni-hamburg.de/abo/00.000_video-23243_2018-07-09_17-15.mp4"/>
      <pubDate>Mon, 09 Jul 2018 17:15:00 +0200</pubDate>
      <guid>https://lecture2go.uni-hamburg.de/l2go/-/get/v/23243</guid>
    </item>
    <item>
      <title>An Integrative, Event-Predictive Perspective on Cognition: Behavioral Evidence and Artificial Neural Network Models</title>
      <description><![CDATA[
The slides are available as a PDF under the download tab.

I propose an integrative theory of cognition, derived from the principle of anticipatory behavior. Acknowledging that in the end all neural activities and encodings should serve homeostasis-oriented, behavioral control purposes – including abilities of adaptation, directing attention, social interaction including communication, versatile planning, and reasoning – behavior is controlled by desired future states. However, in order to make future state imaginations tractable, compact encodings of events and event transitions are essential. When augmenting formalizations of active inference – essentially minimizing anticipated free energy – with event-oriented abstractions, useful hierarchical, event-predictive encodings can develop. I show behavioral evidence that such encodings indeed exist and dynamically unfold in our minds. Moreover, I show several computational neuro-cognitive models that learn hierarchical, event-predictive encodings from sensorimotor experiences for the purpose of optimizing flexible and highly adaptive, interactive goal-directed behavior. I end with evidence that suggests that the joint activation of event-predictive states and successions of such may indeed make us language-ready.]]></description>
      <itunes:summary><![CDATA[
The slides are available as a PDF under the download tab.

I propose an integrative theory of cognition, derived from the principle of anticipatory behavior. Acknowledging that in the end all neural activities and encodings should serve homeostasis-oriented, behavioral control purposes – including abilities of adaptation, directing attention, social interaction including communication, versatile planning, and reasoning – behavior is controlled by desired future states. However, in order to make future state imaginations tractable, compact encodings of events and event transitions are essential. When augmenting formalizations of active inference – essentially minimizing anticipated free energy – with event-oriented abstractions, useful hierarchical, event-predictive encodings can develop. I show behavioral evidence that such encodings indeed exist and dynamically unfold in our minds. Moreover, I show several computational neuro-cognitive models that learn hierarchical, event-predictive encodings from sensorimotor experiences for the purpose of optimizing flexible and highly adaptive, interactive goal-directed behavior. I end with evidence that suggests that the joint activation of event-predictive states and successions of such may indeed make us language-ready.]]></itunes:summary>
      <itunes:duration>00:57:16</itunes:duration>
      <itunes:image href="https://lecture2go.uni-hamburg.de/images/00.000_video-23166_2018-06-25_17-15.jpg?lastmodified=1663761007917"/>
      <link>https://lecture2go.uni-hamburg.de/l2go/-/get/v/23166</link>
      <enclosure length="729820050" type="video/mp4" url="https://l2gdownload.rrz.uni-hamburg.de/abo/00.000_video-23166_2018-06-25_17-15.mp4"/>
      <pubDate>Mon, 25 Jun 2018 17:15:00 +0200</pubDate>
      <guid>https://lecture2go.uni-hamburg.de/l2go/-/get/v/23166</guid>
    </item>
    <item>
      <title>Wem gehören die Daten? Zugangs- und Verfügungsmodelle in der Datenökonomie</title>
      <description><![CDATA[
Data is considered the new currency of the digitalized economy. Nevertheless, data as such is an intangible asset and, under European law, cannot be owned. In law and politics, there is controversial debate about data ownership, copyright, and contract law regarding data, and about their effects on innovation and on the balance of power between strong platforms and start-ups. The talk presents several conceptual models of a data economy in the sense of delimiting rights of disposal and use, including the micropayment system (Lanier), data as a public good (Morozov), the commons model (Ostrom), and the trusteeship model (Winnickoff). Their strengths and weaknesses, as well as the opportunities and limits of their applicability, are discussed.]]></description>
      <itunes:summary><![CDATA[
Data is considered the new currency of the digitalized economy. Nevertheless, data as such is an intangible asset and, under European law, cannot be owned. In law and politics, there is controversial debate about data ownership, copyright, and contract law regarding data, and about their effects on innovation and on the balance of power between strong platforms and start-ups. The talk presents several conceptual models of a data economy in the sense of delimiting rights of disposal and use, including the micropayment system (Lanier), data as a public good (Morozov), the commons model (Ostrom), and the trusteeship model (Winnickoff). Their strengths and weaknesses, as well as the opportunities and limits of their applicability, are discussed.]]></itunes:summary>
      <itunes:duration>01:22:29</itunes:duration>
      <itunes:image href="https://lecture2go.uni-hamburg.de/images/00.000_video-23135_2018-06-18_17-15.jpg?lastmodified=1663761007631"/>
      <link>https://lecture2go.uni-hamburg.de/l2go/-/get/v/23135</link>
      <pubDate>Mon, 18 Jun 2018 17:15:00 +0200</pubDate>
      <guid>https://lecture2go.uni-hamburg.de/l2go/-/get/v/23135</guid>
    </item>
    <item>
      <title>What is Embodiment, and How Does It Affect the Way We Function?</title>
      <description><![CDATA[
The slides shown during the talk are available as a PDF under the download tab.

The way humans learn is very much affected by the fact that we have an embodiment - a physical location in the world, and the ability to change the world (both through physical interaction and through spoken and written communication with other agents). Ideas about the effect of human embodiment can be used to improve the functionality and learning strategies of artificial embodied systems, such as autonomous cars, humanoid robots, exoskeletons, search and rescue robots, etc. I like to think about the effect of embodiment on our learning in three related ways:

1. We are able to alter the state of the scene we are observing so as to learn aspects of it that are not apparent from a first look. For example, we can move our head to look from a different angle, or squeeze, push or shake an object to investigate it.

2. Humans have a very limited communication bandwidth compared to the internal computation capacity in the brain. This means that we cannot easily perform reasoning together with other humans in the way a computer cluster can share computations. It also means that communication between humans is heavily under-determined and error-prone.

3. This limited bandwidth also means that we are forced to learn from relatively few examples, and are extremely good at transfer learning and abstraction of knowledge. For example, it has been shown that a child can learn to recognize an unseen animal, e.g., an elephant, from a single simple drawing. This indicates that humans use very different visual learning strategies than state-of-the-art Computer Vision systems.

This has implications for how to design artificial embodied systems, especially systems that should collaborate with, learn from, and solve problems together with humans. In the context of this, I will outline a few of the projects in my group.]]></description>
      <itunes:summary><![CDATA[
The slides shown during the talk are available as a PDF under the download tab.

The way humans learn is very much affected by the fact that we have an embodiment - a physical location in the world, and the ability to change the world (both through physical interaction and through spoken and written communication with other agents). Ideas about the effect of human embodiment can be used to improve the functionality and learning strategies of artificial embodied systems, such as autonomous cars, humanoid robots, exoskeletons, search and rescue robots, etc. I like to think about the effect of embodiment on our learning in three related ways:

1. We are able to alter the state of the scene we are observing so as to learn aspects of it that are not apparent from a first look. For example, we can move our head to look from a different angle, or squeeze, push or shake an object to investigate it.

2. Humans have a very limited communication bandwidth compared to the internal computation capacity in the brain. This means that we cannot easily perform reasoning together with other humans in the way a computer cluster can share computations. It also means that communication between humans is heavily under-determined and error-prone.

3. This limited bandwidth also means that we are forced to learn from relatively few examples, and are extremely good at transfer learning and abstraction of knowledge. For example, it has been shown that a child can learn to recognize an unseen animal, e.g., an elephant, from a single simple drawing. This indicates that humans use very different visual learning strategies than state-of-the-art Computer Vision systems.

This has implications for how to design artificial embodied systems, especially systems that should collaborate with, learn from, and solve problems together with humans. In the context of this, I will outline a few of the projects in my group.]]></itunes:summary>
      <itunes:duration>00:49:13</itunes:duration>
      <itunes:image href="https://lecture2go.uni-hamburg.de/images/00.000_video-22933_2018-05-07_17-15.jpg?lastmodified=1663761005921"/>
      <link>https://lecture2go.uni-hamburg.de/l2go/-/get/v/22933</link>
      <enclosure length="627279223" type="video/mp4" url="https://l2gdownload.rrz.uni-hamburg.de/abo/00.000_video-22933_2018-05-07_17-15.mp4"/>
      <pubDate>Mon, 07 May 2018 17:15:00 +0200</pubDate>
      <guid>https://lecture2go.uni-hamburg.de/l2go/-/get/v/22933</guid>
    </item>
    <item>
      <title>Networking the IoT with RIOT</title>
      <description><![CDATA[
The Internet of Things (IoT) is rapidly evolving from large numbers of embedded devices that gradually connect to the Internet. Such nodes are often (very) constrained and limited to battery-powered, low-power lossy radio links. RIOT, the friendly operating system for the IoT, is an open source initiative for fueling an IoT ecosystem that is not locked in with vendors or service operators.

This talk introduces the networking architecture that turns RIOT into a powerful IoT system, and enables low-power wireless deployment. RIOT networking offers (i) a modular architecture with generic interfaces for plugging in drivers, protocols, or entire stacks, (ii) support for multiple heterogeneous interfaces and stacks that can operate concurrently, and (iii) GNRC, its cleanly layered, recursively composed default network stack. Focusing on deployability, we discuss and analyse several IoT networking approaches including 6LoWPAN and Information-Centric Networking. Selected security aspects will also be touched upon.

The slides shown in the video are available as a PDF file under the download tab.]]></description>
      <itunes:summary><![CDATA[
The Internet of Things (IoT) is rapidly evolving from large numbers of embedded devices that gradually connect to the Internet. Such nodes are often (very) constrained and limited to battery-powered, low-power lossy radio links. RIOT, the friendly operating system for the IoT, is an open source initiative for fueling an IoT ecosystem that is not locked in with vendors or service operators.

This talk introduces the networking architecture that turns RIOT into a powerful IoT system, and enables low-power wireless deployment. RIOT networking offers (i) a modular architecture with generic interfaces for plugging in drivers, protocols, or entire stacks, (ii) support for multiple heterogeneous interfaces and stacks that can operate concurrently, and (iii) GNRC, its cleanly layered, recursively composed default network stack. Focusing on deployability, we discuss and analyse several IoT networking approaches including 6LoWPAN and Information-Centric Networking. Selected security aspects will also be touched upon.

The slides shown in the video are available as a PDF file under the download tab.]]></itunes:summary>
      <itunes:duration>00:49:17</itunes:duration>
      <itunes:image href="https://lecture2go.uni-hamburg.de/images/00.000_video-22862_2018-04-23_17-15.jpg?lastmodified=1663761005411"/>
      <link>https://lecture2go.uni-hamburg.de/l2go/-/get/v/22862</link>
      <enclosure length="628016011" type="video/mp4" url="https://l2gdownload.rrz.uni-hamburg.de/abo/00.000_video-22862_2018-04-23_17-15.mp4"/>
      <pubDate>Mon, 23 Apr 2018 17:15:00 +0200</pubDate>
      <guid>https://lecture2go.uni-hamburg.de/l2go/-/get/v/22862</guid>
    </item>
    <item>
      <title>Analyzing Human Behavior in Video Sequences</title>
      <description><![CDATA[
Analyzing the behavior of humans in continuous video recordings is still a very difficult task. In the fully supervised setting, temporal models like RNNs are trained on videos that are annotated at the frame level. Acquiring such annotations, however, is very time-consuming, and strong temporal models require large amounts of annotated training data. Weaker forms of supervision like transcripts are therefore investigated to learn temporal models. In this talk, I will describe some of our recent work on weakly supervised learning of actions and I will give an overview of the research activities that are conducted within the DFG research unit "Anticipating Human Behavior" at the University of Bonn.]]></description>
      <itunes:summary><![CDATA[
Analyzing the behavior of humans in continuous video recordings is still a very difficult task. In the fully supervised setting, temporal models like RNNs are trained on videos that are annotated at the frame level. Acquiring such annotations, however, is very time-consuming, and strong temporal models require large amounts of annotated training data. Weaker forms of supervision like transcripts are therefore investigated to learn temporal models. In this talk, I will describe some of our recent work on weakly supervised learning of actions and I will give an overview of the research activities that are conducted within the DFG research unit "Anticipating Human Behavior" at the University of Bonn.]]></itunes:summary>
      <itunes:duration>00:44:40</itunes:duration>
      <itunes:image href="https://lecture2go.uni-hamburg.de/images/00-000_video-22261_2017-11-20_17-15.jpg?lastmodified=1663760954680"/>
      <link>https://lecture2go.uni-hamburg.de/l2go/-/get/v/22261</link>
      <pubDate>Mon, 20 Nov 2017 17:15:00 +0100</pubDate>
      <guid>https://lecture2go.uni-hamburg.de/l2go/-/get/v/22261</guid>
    </item>
    <item>
      <title>Jointly Representing Images and Text: Dependency Graphs, Word Senses, and Multimodal Embeddings</title>
      <description><![CDATA[
In this presentation, I will argue that we can make progress in language/vision tasks if we represent images in structured ways, rather than just labeling objects, actions, or attributes. In particular, deploying structured representations from natural language processing is fruitful: I will discuss how visual dependency representations (VDRs), which borrow ideas from dependency parsing, can be used to capture how the objects in a scene interact with each other. VDRs are useful for tasks such as image retrieval or image description. Secondly, I will argue that much more fine-grained representations of actions are needed for most language/vision tasks. Again, ideas from NLP can be leveraged: I will introduce algorithms that use multimodal embeddings to perform verb sense disambiguation in a visual context.]]></description>
      <itunes:summary><![CDATA[
In this presentation, I will argue that we can make progress in language/vision tasks if we represent images in structured ways, rather than just labeling objects, actions, or attributes. In particular, deploying structured representations from natural language processing is fruitful: I will discuss how visual dependency representations (VDRs), which borrow ideas from dependency parsing, can be used to capture how the objects in a scene interact with each other. VDRs are useful for tasks such as image retrieval or image description. Secondly, I will argue that much more fine-grained representations of actions are needed for most language/vision tasks. Again, ideas from NLP can be leveraged: I will introduce algorithms that use multimodal embeddings to perform verb sense disambiguation in a visual context.]]></itunes:summary>
      <itunes:duration>00:55:34</itunes:duration>
      <itunes:image href="https://lecture2go.uni-hamburg.de/images/00-000_video-22180_2017-11-06_17-15.jpg?lastmodified=1663760954657"/>
      <link>https://lecture2go.uni-hamburg.de/l2go/-/get/v/22180</link>
      <pubDate>Mon, 06 Nov 2017 17:15:00 +0100</pubDate>
      <guid>https://lecture2go.uni-hamburg.de/l2go/-/get/v/22180</guid>
    </item>
    <item>
      <title>Machine Learning and Knowledge Extraction</title>
      <description><![CDATA[
Due to a technical error, the caption shown for the first two minutes of the recording shows the wrong title and speaker.

The goal of Machine Learning is to learn from data, to extract and discover knowledge, and to help to make decisions under uncertainty. In automatic machine learning (aML), great advances have been made, for example, in speech recognition, recommender systems, or autonomous vehicles. Automatic approaches greatly benefit from "big data" with many training sets. However, sometimes we are confronted with a small number of complex data sets, where aML suffers from insufficient training samples. The application of such aML approaches in complex application domains, e.g., in health informatics, seems elusive in the near future; a good example is Gaussian processes, where aML (e.g., standard kernel machines) struggles on function extrapolation problems, which are trivial for human learners. In such situations, interactive Machine Learning (iML) can be beneficial, where a human-in-the-loop helps in solving computationally hard problems, e.g., subspace clustering, protein folding, or k-anonymization of health data, where the knowledge and experience of human experts can help to reduce an exponential search space through heuristic selection of samples. Therefore, what would otherwise be an NP-hard problem is greatly reduced in complexity through the input and the assistance of a human agent involved directly in the learning phase. Tackling such challenges needs a concerted effort, fostering integrative ML research between experts from diverse disciplines, from data science to visualization, and both disciplinary excellence and a cross-disciplinary skill set with international collaboration.]]></description>
      <itunes:summary><![CDATA[
Due to a technical error, the caption shown during the first two minutes of the recording displays the wrong title and speaker.

The goal of Machine Learning is to learn from data, to extract and discover knowledge, and to help make decisions under uncertainty. In automatic machine learning (aML), great advances have been made, for example, in speech recognition, recommender systems, or autonomous vehicles. Automatic approaches greatly benefit from "big data" with large training sets. However, sometimes we are confronted with a small number of complex data sets, where aML suffers from insufficient training samples. The application of such aML approaches in complex application domains, e.g., in health informatics, seems elusive in the near future; a good example is Gaussian processes, where aML (e.g., standard kernel machines) struggles on function extrapolation problems that are trivial for human learners. In such situations, interactive Machine Learning (iML) can be beneficial: a human-in-the-loop helps in solving computationally hard problems, e.g., subspace clustering, protein folding, or k-anonymization of health data, where the knowledge and experience of human experts can help to reduce an exponential search space through heuristic selection of samples. What would otherwise be an NP-hard problem is thus greatly reduced in complexity through the input and assistance of a human agent involved directly in the learning phase. Tackling such challenges requires a concerted effort that fosters integrative ML research among experts from diverse disciplines, from data science to visualization, combining disciplinary excellence with a cross-disciplinary skill set and international collaboration.]]></itunes:summary>
      <itunes:duration>00:42:27</itunes:duration>
      <itunes:image href="https://lecture2go.uni-hamburg.de/images/00.000_video-21854_2017-07-17_17-15.jpg?lastmodified=1663760994956"/>
      <link>https://lecture2go.uni-hamburg.de/l2go/-/get/v/21854</link>
      <enclosure length="506176596" type="video/mp4" url="https://l2gdownload.rrz.uni-hamburg.de/abo/00.000_video-21854_2017-07-17_17-15.mp4"/>
      <pubDate>Mon, 17 Jul 2017 17:15:00 +0200</pubDate>
      <guid>https://lecture2go.uni-hamburg.de/l2go/-/get/v/21854</guid>
    </item>
    <item>
      <title>Adaptive Language Technologies</title>
      <description><![CDATA[
Automatic natural language understanding enables natural communication with computers and computer-assisted access to the content of large document collections. While classical approaches to artificial intelligence anticipate all possible situations and interactions in the form of a fully specified dialogue model or ontology, they are hard to adapt to new domains and do not cope well with language change.

In this talk, I will motivate an adaptive, purely data-driven approach to natural language processing. Illustrated by recent research prototypes, three stages of data-driven adaptation will be presented: feature/resource induction, induction of processing components, and continuous data-driven learning.

Finally, I will discuss current research and future directions regarding the integration of symbolic and statistical knowledge, the interpretability of language processing components, and advanced forms of information access.]]></description>
      <itunes:summary><![CDATA[
Automatic natural language understanding enables natural communication with computers and computer-assisted access to the content of large document collections. While classical approaches to artificial intelligence anticipate all possible situations and interactions in the form of a fully specified dialogue model or ontology, they are hard to adapt to new domains and do not cope well with language change.

In this talk, I will motivate an adaptive, purely data-driven approach to natural language processing. Illustrated by recent research prototypes, three stages of data-driven adaptation will be presented: feature/resource induction, induction of processing components, and continuous data-driven learning.

Finally, I will discuss current research and future directions regarding the integration of symbolic and statistical knowledge, the interpretability of language processing components, and advanced forms of information access.]]></itunes:summary>
      <itunes:duration>01:05:36</itunes:duration>
      <itunes:image href="https://lecture2go.uni-hamburg.de/images/00.000_video-21677_2017-06-19_17-15.jpg?lastmodified=1663760992869"/>
      <link>https://lecture2go.uni-hamburg.de/l2go/-/get/v/21677</link>
      <pubDate>Mon, 19 Jun 2017 17:15:00 +0200</pubDate>
      <guid>https://lecture2go.uni-hamburg.de/l2go/-/get/v/21677</guid>
    </item>
    <item>
      <title>Visually Browsing Millions of Images using Image Graphs</title>
      <description><![CDATA[
In the past, an efficient and satisfactory image search was only possible using a combination of keywords and low-level visual image features. Recently, Convolutional Neural Networks (CNNs) have enabled automatic understanding of images. This results in a multitude of new applications and improved visual image search systems. This talk provides an overview of the different methods for image search, explains the principle of CNNs, and shows what future image search systems could look like. We present a new approach to visually exploring very large sets of untagged images. High-quality image descriptors are generated using transformed activations of a convolutional neural network. These features are used to model image similarities, from which a hierarchical image graph is built. We show how such a graph can be constructed efficiently. The best user experience for navigating this graph is achieved by projecting sub-graphs onto a regular 2D image map. This allows users to explore the image graph much as they would use a navigation service.
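
As a toy sketch of the graph-building step (random stand-in descriptors and a brute-force similarity matrix; a system handling millions of images would need approximate nearest-neighbour search instead):

# Connect each image to its k most similar neighbours, using cosine
# similarity over stand-in CNN descriptors.
import numpy as np

rng = np.random.default_rng(2)
descriptors = rng.normal(size=(1000, 64))        # stand-in CNN features
normed = descriptors / np.linalg.norm(descriptors, axis=1, keepdims=True)

k = 5
similarity = normed @ normed.T                   # cosine similarity matrix
np.fill_diagonal(similarity, -np.inf)            # forbid self-edges

graph = {i: [int(j) for j in np.argsort(similarity[i])[-k:][::-1]]
         for i in range(len(normed))}
print(graph[0])  # the five nearest neighbours of image 0, most similar first
]]></description>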
      <itunes:summary><![CDATA[
In the past, an efficient and satisfactory image search was only possible using a combination of keywords and low-level visual image features. Recently, Convolutional Neural Networks (CNNs) have enabled automatic understanding of images. This results in a multitude of new applications and improved visual image search systems. This talk provides an overview of the different methods for image search, explains the principle of CNNs, and shows what future image search systems could look like. We present a new approach to visually exploring very large sets of untagged images. High-quality image descriptors are generated using transformed activations of a convolutional neural network. These features are used to model image similarities, from which a hierarchical image graph is built. We show how such a graph can be constructed efficiently. The best user experience for navigating this graph is achieved by projecting sub-graphs onto a regular 2D image map. This allows users to explore the image graph much as they would use a navigation service.]]></itunes:summary>
      <itunes:duration>00:35:35</itunes:duration>
      <itunes:image href="https://lecture2go.uni-hamburg.de/images/00.000_video-21538_2017-05-29_17-15.jpg?lastmodified=1663760991620"/>
      <link>https://lecture2go.uni-hamburg.de/l2go/-/get/v/21538</link>
      <pubDate>Mon, 29 May 2017 17:15:00 +0200</pubDate>
      <guid>https://lecture2go.uni-hamburg.de/l2go/-/get/v/21538</guid>
    </item>
    <item>
      <title>Making Semantic Technology Intelligent</title>
      <description><![CDATA[
Semantic technology, once a niche academic endeavour, has evolved to the point of becoming an important asset for big organizations and enterprises. Yet, it remains unclear how to relate existing knowledge graphs to the vast amounts of text and structured data available online. In this talk, I will argue that this challenge calls for a move towards more cognitive approaches, including 1) better knowledge integration methods, 2) an increased reliance on linguistic knowledge, and 3) neural modeling. The talk will showcase several of our contributions towards this goal. These include our work on large multilingual knowledge graphs such as the Universal Wordnet, on integrated semantic resources such as FrameBase, and on large-scale common-sense resources such as WebChild.]]></description>
      <itunes:summary><![CDATA[
Semantic technology, once a niche academic endeavour, has evolved to the point of becoming an important asset for big organizations and enterprises. Yet, it remains unclear how to relate existing knowledge graphs to the vast amounts of text and structured data available online. In this talk, I will argue that this challenge calls for a move towards more cognitive approaches, including 1) better knowledge integration methods, 2) an increased reliance on linguistic knowledge, and 3) neural modeling. The talk will showcase several of our contributions towards this goal. These include our work on large multilingual knowledge graphs such as the Universal Wordnet, on integrated semantic resources such as FrameBase, and on large-scale common-sense resources such as WebChild.]]></itunes:summary>
      <itunes:duration>00:46:55</itunes:duration>
      <itunes:image href="https://lecture2go.uni-hamburg.de/images/00.000_video-20696_2016-11-28_17-15.jpg?lastmodified=1663760985689"/>
      <link>https://lecture2go.uni-hamburg.de/l2go/-/get/v/20696</link>
      <pubDate>Mon, 28 Nov 2016 17:15:00 +0100</pubDate>
      <guid>https://lecture2go.uni-hamburg.de/l2go/-/get/v/20696</guid>
    </item>
    <item>
      <title>Cognitive Computer Vision for Mobile Systems</title>
      <description><![CDATA[
Sadly, there was some interference with our wireless microphones during the recording.

The amount of digital images in our daily life has grown exponentially in recent years: cameras are low-cost sensors that are present everywhere, and billions of images are shared on social media every day. Industrial interest in methods for digital image and video processing is also growing strongly. As a consequence, the need for algorithms that automatically improve, analyze, and interpret images continues to rise. Fortunately, the research field of computer vision has also advanced strongly during the last decade, and many things that were not feasible a few years ago are suddenly achievable. However, when it comes to seemingly simple daily-life questions such as "how many objects are on this table?", current systems reach their limits, and the human visual system still clearly outperforms machines.

In my research group, we focus on biologically inspired methods for computer vision. That means we develop algorithms that follow mechanisms of human vision, starting from psychophysical and neurobiological findings. Topics of our research include the detection of saliency in images and the discovery of objects. We focus on methods for mobile systems, such as wearable cameras or autonomous service robots.
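
For illustration, here is one classic, lightweight saliency baseline, the spectral-residual method of Hou & Zhang (2007); it is shown only to make the notion of a saliency map concrete and is not necessarily the method used by the speaker's group:

import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def spectral_residual_saliency(gray):
    """gray: 2-D float array; returns a saliency map of the same shape."""
    spectrum = np.fft.fft2(gray)
    log_amplitude = np.log(np.abs(spectrum) + 1e-8)
    phase = np.angle(spectrum)
    # The "residual" is what remains after removing the locally
    # averaged (i.e. expected) log amplitude.
    residual = log_amplitude - uniform_filter(log_amplitude, size=3)
    saliency = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    return gaussian_filter(saliency, sigma=2.0)

image = np.zeros((64, 64))
image[28:36, 28:36] = 1.0                 # a single salient square
sal = spectral_residual_saliency(image)
r, c = np.unravel_index(sal.argmax(), sal.shape)
print(r, c)                               # peaks on or near the square
]]></description>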
      <itunes:summary><![CDATA[
Sadly, there was some interference with our wireless microphones during the recording.

The amount of digital images in our daily life has grown exponentially in recent years: cameras are low-cost sensors that are present everywhere, and billions of images are shared on social media every day. Industrial interest in methods for digital image and video processing is also growing strongly. As a consequence, the need for algorithms that automatically improve, analyze, and interpret images continues to rise. Fortunately, the research field of computer vision has also advanced strongly during the last decade, and many things that were not feasible a few years ago are suddenly achievable. However, when it comes to seemingly simple daily-life questions such as "how many objects are on this table?", current systems reach their limits, and the human visual system still clearly outperforms machines.

In my research group, we focus on biologically inspired methods for computer vision. That means we develop algorithms that follow mechanisms of human vision, starting from psychophysical and neurobiological findings. Topics of our research include the detection of saliency in images and the discovery of objects. We focus on methods for mobile systems, such as wearable cameras or autonomous service robots.]]></itunes:summary>
      <itunes:duration>00:44:34</itunes:duration>
      <itunes:image href="https://lecture2go.uni-hamburg.de/images/00.000_video-20604_2016-11-14_17-15.jpg?lastmodified=1663760985091"/>
      <link>https://lecture2go.uni-hamburg.de/l2go/-/get/v/20604</link>
      <pubDate>Mon, 14 Nov 2016 17:15:00 +0100</pubDate>
      <guid>https://lecture2go.uni-hamburg.de/l2go/-/get/v/20604</guid>
    </item>
    <item>
      <title>Graph Mining for Vulnerability Discovery</title>
      <description><![CDATA[
Detecting vulnerabilities in software is a key to protecting IT systems. Unfortunately, only a few kinds of security flaws can be detected automatically. Most vulnerabilities are discovered only through laborious manual analysis, as in the cases of Heartbleed and Shellshock. This talk presents a new approach to searching for vulnerabilities that supports a human expert rather than replacing him. The approach combines classic concepts from program analysis with modern techniques for graph analysis, making it possible to better model and search for vulnerabilities in software. In an empirical study, 18 previously unknown vulnerabilities in the Linux kernel were identified with this approach.
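
As a toy illustration of the underlying idea (not the actual system), one can model code as a graph and let the analyst query it for suspicious data flows; the node names and the networkx-based query below are purely illustrative:

# Tiny stand-in "code property graph": nodes are program entities,
# edges are data-flow relations.
import networkx as nx

cpg = nx.DiGraph()
cpg.add_edge("recv_buf", "copy_len")      # attacker-controlled length...
cpg.add_edge("copy_len", "memcpy_call")   # ...flows into memcpy
cpg.add_edge("user_input", "check_len")   # this path passes a sanity check
cpg.add_edge("check_len", "memcpy_call")

def unsanitized_paths(graph, source, sink, sanitizers):
    """Yield data-flow paths from source to sink that avoid all sanitizers."""
    for path in nx.all_simple_paths(graph, source, sink):
        if not any(node in sanitizers for node in path):
            yield path

for p in unsanitized_paths(cpg, "recv_buf", "memcpy_call", {"check_len"}):
    print("candidate vulnerability:", " -> ".join(p))
]]></description>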
      <itunes:summary><![CDATA[
Detecting vulnerabilities in software is a key to protecting IT systems. Unfortunately, only a few kinds of security flaws can be detected automatically. Most vulnerabilities are discovered only through laborious manual analysis, as in the cases of Heartbleed and Shellshock. This talk presents a new approach to searching for vulnerabilities that supports a human expert rather than replacing him. The approach combines classic concepts from program analysis with modern techniques for graph analysis, making it possible to better model and search for vulnerabilities in software. In an empirical study, 18 previously unknown vulnerabilities in the Linux kernel were identified with this approach.]]></itunes:summary>
      <itunes:duration>01:22:28</itunes:duration>
      <itunes:image href="https://lecture2go.uni-hamburg.de/images/00.000_Prof.Dr.KonradRieck_2016-07-04_19-58.jpg?lastmodified=1663760964764"/>
      <link>https://lecture2go.uni-hamburg.de/l2go/-/get/v/19635</link>
      <enclosure length="580314893" type="video/mp4" url="https://l2gdownload.rrz.uni-hamburg.de/abo/00.000_Prof.Dr.KonradRieck_2016-07-04_19-58.mp4"/>
      <pubDate>Mon, 04 Jul 2016 19:58:00 +0200</pubDate>
      <guid>https://lecture2go.uni-hamburg.de/l2go/-/get/v/19635</guid>
    </item>
    <item>
      <title>Embodied Affective Decision Making in Robots</title>
      <description><![CDATA[
The importance of the role of affect (e.g. drives, motivations, emotions) in decision making has been increasingly recognized by researchers in the fields of neuroscience and psychology in recent years. It has also been of contemporary interest to roboticists with a focus on issues concerning ‘embodiment’. In this talk, I will present work carried out over several projects that focuses on affective mechanisms used to guide decision making in robots. This talk will consist of two parts covering past and recent research in the area of embodied affective decision making in robots. In the first part, drawing from examples of my own, and my PhD students’ work, I will provide examples from evolutionary robotics and human-robot interaction as to how affective mechanisms can be exploited in robotics to produce adaptive behavior and decision making, i.e. that which is not the direct product of learning. In the second part, I will discuss recent work on tactile interaction between humans and robots. The ability to reliably convey and interpret emotional signals through touch (as a form of embodied affective interaction) provides an important source of information for appropriate social decision making. Recent results from a human-robot tactile interaction study will be presented that show how emotions can be expressed according to a number of different dimensions amenable to tactile sensing.]]></description>
      <itunes:summary><![CDATA[
The importance of the role of affect (e.g. drives, motivations, emotions) in decision making has been increasingly recognized by researchers in the fields of neuroscience and psychology in recent years. It has also been of contemporary interest to roboticists with a focus on issues concerning ‘embodiment’. In this talk, I will present work carried out over several projects that focuses on affective mechanisms used to guide decision making in robots. This talk will consist of two parts covering past and recent research in the area of embodied affective decision making in robots. In the first part, drawing from examples of my own, and my PhD students’ work, I will provide examples from evolutionary robotics and human-robot interaction as to how affective mechanisms can be exploited in robotics to produce adaptive behavior and decision making, i.e. that which is not the direct product of learning. In the second part, I will discuss recent work on tactile interaction between humans and robots. The ability to reliably convey and interpret emotional signals through touch (as a form of embodied affective interaction) provides an important source of information for appropriate social decision making. Recent results from a human-robot tactile interaction study will be presented that show how emotions can be expressed according to a number of different dimensions amenable to tactile sensing.]]></itunes:summary>
      <itunes:duration>01:06:43</itunes:duration>
      <itunes:image href="https://lecture2go.uni-hamburg.de/images/00.000_Dr.RobertLowe_2016-05-03_13-42.jpg?lastmodified=1663760957375"/>
      <link>https://lecture2go.uni-hamburg.de/l2go/-/get/v/19338</link>
      <pubDate>Tue, 03 May 2016 13:42:00 +0200</pubDate>
      <guid>https://lecture2go.uni-hamburg.de/l2go/-/get/v/19338</guid>
    </item>
    <item>
      <title>The Challenges of Affect Detection</title>
      <description><![CDATA[
Software engineering involves a large amount of social interaction, as programmers often need to cooperate with others, whether directly or indirectly. However, we have become fully aware of the importance of social aspects in software engineering activities only over the last decade. In fact, it was not until the recent diffusion and massive adoption of social media that we could witness the rise of the “social programmer” and the surrounding ecosystem. Social media has deeply influenced the design of software development-oriented tools such as GitHub (i.e., a social coding site) and Stack Overflow (i.e., a community-based question answering site). Stack Overflow, in particular, is an example of an online community where social programmers network by reading and answering others’ questions, thus participating in the creation and diffusion of crowdsourced knowledge and software documentation.

One of the biggest drawbacks of computer-mediated communication is the difficulty of appropriately conveying sentiment through text. While display rules for emotions exist and are widely accepted for traditional face-to-face interaction, web users are not necessarily prepared to deal effectively with the barriers social media places on non-verbal communication. Thus, the design of systems and mechanisms for developing emotional awareness between communicators is an important technical and social challenge for research on computer-supported collaboration and social computing.

As a consequence, a recent research trend has emerged that studies the role of affect in the social programmer ecosystem by applying sentiment analysis to the content available on sites such as GitHub and Stack Overflow, as well as in other asynchronous communication artifacts such as comments in issue tracking systems. This talk surveys the state of the art in sentiment analysis tools and examines to what extent they are able to detect affective expressions in the communication traces left by software developers. It discusses the advantages and limitations of choosing sentiment polarity and strength as a way to operationalize affective states in empirical studies. Finally, open challenges and opportunities of affective software engineering are discussed, with special focus on the need to combine cognitive emotion modeling with affective computing and natural language processing techniques to build large-scale, robust approaches for sentiment detection in software engineering.
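
To make the notion of sentiment polarity concrete, here is a deliberately naive lexicon-based scorer (illustrative word lists only; real tools must handle negation, intensity, emoticons, and developer jargon, where e.g. “kill a process” is perfectly neutral):

# Count positive minus negative words after basic punctuation stripping.
POSITIVE = {"great", "thanks", "works", "love", "awesome"}
NEGATIVE = {"broken", "hate", "fails", "terrible", "bug"}

def polarity(text):
    words = [w.strip(".,!?") for w in text.lower().split()]
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

print(polarity("Thanks, this works great!"))                 # 3 (positive)
print(polarity("The build is broken and the bug is back."))  # -2 (negative)
]]></description>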
      <itunes:summary><![CDATA[
Software engineering involves a large amount of social interaction, as programmers often need to cooperate with others, whether directly or indirectly. However, we have become fully aware of the importance of social aspects in software engineering activities only over the last decade. In fact, it was not until the recent diffusion and massive adoption of social media that we could witness the rise of the “social programmer” and the surrounding ecosystem. Social media has deeply influenced the design of software development-oriented tools such as GitHub (i.e., a social coding site) and Stack Overflow (i.e., a community-based question answering site). Stack Overflow, in particular, is an example of an online community where social programmers network by reading and answering others’ questions, thus participating in the creation and diffusion of crowdsourced knowledge and software documentation.

One of the biggest drawbacks of computer-mediated communication is the difficulty of appropriately conveying sentiment through text. While display rules for emotions exist and are widely accepted for traditional face-to-face interaction, web users are not necessarily prepared to deal effectively with the barriers social media places on non-verbal communication. Thus, the design of systems and mechanisms for developing emotional awareness between communicators is an important technical and social challenge for research on computer-supported collaboration and social computing.

As a consequence, a recent research trend has emerged that studies the role of affect in the social programmer ecosystem by applying sentiment analysis to the content available on sites such as GitHub and Stack Overflow, as well as in other asynchronous communication artifacts such as comments in issue tracking systems. This talk surveys the state of the art in sentiment analysis tools and examines to what extent they are able to detect affective expressions in the communication traces left by software developers. It discusses the advantages and limitations of choosing sentiment polarity and strength as a way to operationalize affective states in empirical studies. Finally, open challenges and opportunities of affective software engineering are discussed, with special focus on the need to combine cognitive emotion modeling with affective computing and natural language processing techniques to build large-scale, robust approaches for sentiment detection in software engineering.]]></itunes:summary>
      <itunes:duration>01:02:25</itunes:duration>
      <itunes:image href="https://lecture2go.uni-hamburg.de/images/00.000_Novielli_2016-01-25_17-15.jpg?lastmodified=1663760963006"/>
      <link>https://lecture2go.uni-hamburg.de/l2go/-/get/v/18878</link>
      <enclosure length="262382382" type="video/mp4" url="https://l2gdownload.rrz.uni-hamburg.de/abo/00.000_Novielli_2016-01-25_17-15.mp4"/>
      <pubDate>Mon, 25 Jan 2016 17:15:00 +0100</pubDate>
      <guid>https://lecture2go.uni-hamburg.de/l2go/-/get/v/18878</guid>
    </item>
    <item>
      <title>Developmental Robotics - From Babies to Robots</title>
      <description><![CDATA[
Growing theoretical and experimental research on action and language processing, and on number learning and space representation, clearly demonstrates the role of embodiment in cognition and language processing. In psychology and neuroscience, this evidence constitutes the basis of embodied cognition, also known as grounded cognition (Pezzulo et al. 2012). In robotics, these studies have important implications for the design of linguistic capabilities in cognitive agents and robots for human-robot communication, and have led to the new interdisciplinary approach of Developmental Robotics (Cangelosi &amp; Schlesinger 2015). During the talk we will present examples of developmental robotics models and experimental results from iCub experiments on embodiment biases in early word acquisition, on word order cues for lexical development, and on number and space interaction effects. The presentation will also discuss the implications for the “symbol grounding problem” (Cangelosi, 2012) and how embodied robots can help address the issue of embodied cognition and the grounding of symbol manipulation in sensorimotor intelligence.]]></description>
      <itunes:summary><![CDATA[
Growing theoretical and experimental research on action and language processing, and on number learning and space representation, clearly demonstrates the role of embodiment in cognition and language processing. In psychology and neuroscience, this evidence constitutes the basis of embodied cognition, also known as grounded cognition (Pezzulo et al. 2012). In robotics, these studies have important implications for the design of linguistic capabilities in cognitive agents and robots for human-robot communication, and have led to the new interdisciplinary approach of Developmental Robotics (Cangelosi &amp; Schlesinger 2015). During the talk we will present examples of developmental robotics models and experimental results from iCub experiments on embodiment biases in early word acquisition, on word order cues for lexical development, and on number and space interaction effects. The presentation will also discuss the implications for the “symbol grounding problem” (Cangelosi, 2012) and how embodied robots can help address the issue of embodied cognition and the grounding of symbol manipulation in sensorimotor intelligence.]]></itunes:summary>
      <itunes:duration>00:55:54</itunes:duration>
      <itunes:image href="https://lecture2go.uni-hamburg.de/images/00.000_Cangelosi_2015-04-16_16-15.jpg?lastmodified=1663760955800"/>
      <link>https://lecture2go.uni-hamburg.de/l2go/-/get/v/17527</link>
      <enclosure length="239309126" type="video/mp4" url="https://l2gdownload.rrz.uni-hamburg.de/abo/00.000_Cangelosi_2015-04-16_16-15.mp4"/>
      <pubDate>Thu, 16 Apr 2015 16:15:00 +0200</pubDate>
      <guid>https://lecture2go.uni-hamburg.de/l2go/-/get/v/17527</guid>
    </item>
    <item>
      <title>Fooling your Senses for (Super-) Natural User Interfaces</title>
      <description><![CDATA[
In his essay “The Ultimate Display” from 1965, Ivan E. Sutherland states that “The ultimate display would [...] be a room within which the computer can control the existence of matter [...]”. This general notion of a computer-mediated or virtual reality, in which synthetic objects or the entire virtual environment become indistinguishable from the real world, dates back to Plato’s “The Allegory of the Cave” and has been revisited again and again in science fiction literature as well as the movie industry.

For instance, virtual reality is often used to question whether we truly “know” if our perceptions are real or not. Movies like “The Matrix” or the fictional holodeck from the Star Trek universe are prominent examples of this kind of perceptual ambiguity. Furthermore, in movies like Steven Spielberg’s “Minority Report” or Jon Favreau’s “Iron Man 2”, actors seamlessly use free-hand gestures in space combined with speech to manipulate 3D holographic projections, while also perceiving haptic feedback when touching the virtual objects.

In my talk I will revisit some of the most visually impressive 3D user interfaces and experiences of such fictional ultimate displays. As a matter of fact, we cannot let a computer fully control the existence of matter, but we can fool our senses and give a user the illusion that the computer can after all. I will show how different ultimate displays can be implemented with current state-of-the-art technology by exploiting perceptually inspired interfaces. However, we will see that the resulting ultimate displays are not so ultimate at all, but pose interesting new research challenges and questions.]]></description>
      <itunes:summary><![CDATA[
In his essay “The Ultimate Display” from 1965, Ivan E. Sutherland states that “The ultimate display would [...] be a room within which the computer can control the existence of matter [...]”. This general notion of a computer-mediated or virtual reality, in which synthetic objects or the entire virtual environment become indistinguishable from the real world, dates back to Plato’s “The Allegory of the Cave” and has been revisited again and again in science fiction literature as well as the movie industry.

For instance, virtual reality is often used to question whether we truly “know” if our perceptions are real or not. Movies like “The Matrix” or the fictional holodeck from the Star Trek universe are prominent examples of this kind of perceptual ambiguity. Furthermore, in movies like Steven Spielberg’s “Minority Report” or Jon Favreau’s “Iron Man 2”, actors seamlessly use free-hand gestures in space combined with speech to manipulate 3D holographic projections, while also perceiving haptic feedback when touching the virtual objects.

In my talk I will revisit some of the most visually impressive 3D user interfaces and experiences of such fictional ultimate displays. As a matter of fact, we cannot let a computer fully control the existence of matter, but we can fool our senses and give a user the illusion that the computer can after all. I will show how different ultimate displays can be implemented with current state-of-the-art technology by exploiting perceptually inspired interfaces. However, we will see that the resulting ultimate displays are not so ultimate at all, but pose interesting new research challenges and questions.]]></itunes:summary>
      <itunes:duration>00:45:48</itunes:duration>
      <itunes:image href="https://lecture2go.uni-hamburg.de/images/00.000_steinicke_2014-10-27_17-00.jpg?lastmodified=1663760981617"/>
      <link>https://lecture2go.uni-hamburg.de/l2go/-/get/v/16819</link>
      <enclosure length="196285554" type="video/mp4" url="https://l2gdownload.rrz.uni-hamburg.de/abo/00.000_steinicke_2014-10-27_17-00.mp4"/>
      <pubDate>Mon, 27 Oct 2014 17:00:00 +0100</pubDate>
      <guid>https://lecture2go.uni-hamburg.de/l2go/-/get/v/16819</guid>
    </item>
    <item>
      <title>Social media for developers: How tools shape the way we work</title>
      <description><![CDATA[
Social media such as Twitter, GitHub, and Stack Overflow have changed how developers work: they get more done in less time and collaborate at unprecedented scale and speed. How and why did this happen, and what other effects followed from this change? To answer these questions, this talk discusses two empirical studies on how developers use GitHub and Twitter, and shows how these sites influence developer behavior.]]></description>
      <itunes:summary><![CDATA[
Social media such as Twitter, GitHub, and Stack Overflow have changed how developers work: they get more done in less time and collaborate at unprecedented scale and speed. How and why did this happen, and what other effects followed from this change? To answer these questions, this talk discusses two empirical studies on how developers use GitHub and Twitter, and shows how these sites influence developer behavior.]]></itunes:summary>
      <itunes:duration>00:52:24</itunes:duration>
      <itunes:image href="https://lecture2go.uni-hamburg.de/images/00.000_singer_2014-10-20_17-15.jpg?lastmodified=1663760981310"/>
      <link>https://lecture2go.uni-hamburg.de/l2go/-/get/v/16767</link>
      <enclosure length="215167016" type="video/mp4" url="https://l2gdownload.rrz.uni-hamburg.de/abo/00.000_singer_2014-10-20_17-15.mp4"/>
      <pubDate>Mon, 20 Oct 2014 17:15:00 +0200</pubDate>
      <guid>https://lecture2go.uni-hamburg.de/l2go/-/get/v/16767</guid>
    </item>
    <item>
      <title>Machine Learning of Motor Skills for Robotics</title>
      <description><![CDATA[
Autonomous robots that can assist humans in situations of daily life have been a long-standing vision of robotics, artificial intelligence, and cognitive science. A first step towards this goal is to create robots that can learn tasks triggered by visual stimuli from higher-level instruction. However, learning techniques have yet to live up to this promise, as only few methods manage to scale to high-dimensional manipulators or humanoid robots. In this talk, we investigate a general framework suitable for learning motor skills in robotics, including manipulation of both static and dynamic objects that are perceived using vision. The resulting approach relies on a representation of motor skills by parameterized motor primitive policies acting as building blocks of movement generation, and on a learned task execution module that transforms these movements into motor commands. We discuss task-appropriate learning approaches for imitation learning, model learning, and reinforcement learning for robots with many degrees of freedom that perceive the manipulated objects using robot vision. Empirical evaluations on several robot systems illustrate the effectiveness and applicability to learning control on an anthropomorphic robot arm. These robot motor skills range from basic visuo-motor skills to playing robot table tennis against a human being and manipulating various objects.
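
As a minimal numpy sketch of one common motor-primitive parameterization, here is a discrete dynamic movement primitive (DMP) integrated with explicit Euler steps; all constants and the basis-function layout are illustrative, and the exact formulation used in the talk may differ:

import numpy as np

def rollout(weights, y0=0.0, g=1.0, tau=1.0, dt=0.01,
            alpha=25.0, beta=6.25, alpha_x=3.0):
    """Integrate one DMP; 'weights' shape the transient toward the goal g."""
    n = len(weights)
    centers = np.exp(-alpha_x * np.linspace(0.0, 1.0, n))  # basis centers in phase space
    widths = n ** 1.5 / centers
    y, z, x, traj = y0, 0.0, 1.0, []
    for _ in range(int(tau / dt)):
        psi = np.exp(-widths * (x - centers) ** 2)          # radial basis activations
        forcing = x * (g - y0) * (psi @ weights) / psi.sum()
        z += dt / tau * (alpha * (beta * (g - y) - z) + forcing)
        y += dt / tau * z
        x += dt / tau * (-alpha_x * x)                      # canonical system decays
        traj.append(y)
    return np.array(traj)

# With zero weights the DMP is a plain point attractor from y0 to g;
# learned weights would reproduce a demonstrated movement.
print(rollout(np.zeros(10))[-1])  # close to the goal g = 1.0
]]></description>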
      <itunes:summary><![CDATA[
Autonomous robots that can assist humans in situations of daily life have been a long-standing vision of robotics, artificial intelligence, and cognitive science. A first step towards this goal is to create robots that can learn tasks triggered by visual stimuli from higher-level instruction. However, learning techniques have yet to live up to this promise, as only few methods manage to scale to high-dimensional manipulators or humanoid robots. In this talk, we investigate a general framework suitable for learning motor skills in robotics, including manipulation of both static and dynamic objects that are perceived using vision. The resulting approach relies on a representation of motor skills by parameterized motor primitive policies acting as building blocks of movement generation, and on a learned task execution module that transforms these movements into motor commands. We discuss task-appropriate learning approaches for imitation learning, model learning, and reinforcement learning for robots with many degrees of freedom that perceive the manipulated objects using robot vision. Empirical evaluations on several robot systems illustrate the effectiveness and applicability to learning control on an anthropomorphic robot arm. These robot motor skills range from basic visuo-motor skills to playing robot table tennis against a human being and manipulating various objects.]]></itunes:summary>
      <itunes:duration>01:09:35</itunes:duration>
      <itunes:image href="https://lecture2go.uni-hamburg.de/images/12.345_JanPeters_2014-07-10_13-06.jpg?lastmodified=1663761143444"/>
      <link>https://lecture2go.uni-hamburg.de/l2go/-/get/v/16546</link>
      <enclosure length="308156208" type="video/mp4" url="https://l2gdownload.rrz.uni-hamburg.de/abo/12.345_JanPeters_2014-07-10_13-06.mp4"/>
      <pubDate>Thu, 10 Jul 2014 13:06:00 +0200</pubDate>
      <guid>https://lecture2go.uni-hamburg.de/l2go/-/get/v/16546</guid>
    </item>
    <item>
      <title>The Hamburg Bit-Bots: Setting Out into the Unknown</title>
      <description><![CDATA[
In April 2014, the student team Hamburg Bit-Bots traveled to Tehran (Iran) to take part in the RoboCup IranOpen. Despite careful preparation, much remained unclear beforehand, since reliable information about the country is hard to obtain. For the three women on the traveling team in particular, many questions arose concerning communication and expected behavior, which, to everyone's surprise, turned out to be less problematic on site than feared. The competition then revealed problems with this year's rule changes and motivated the team to bring forward the planned construction of its first self-designed robot, in order to have a working robot of about 80 cm ready by July, roughly 40 cm taller than the robots used so far.

The first part of the talk covers the impressions gathered in Iran: how the team members felt on site, and how much the prior information differed from reality.

Afterwards, the lessons learned from the competition are presented, along with the planning and construction of the new robot, codenamed GOAL.]]></description>
      <itunes:summary><![CDATA[
In April 2014, the student team Hamburg Bit-Bots traveled to Tehran (Iran) to take part in the RoboCup IranOpen. Despite careful preparation, much remained unclear beforehand, since reliable information about the country is hard to obtain. For the three women on the traveling team in particular, many questions arose concerning communication and expected behavior, which, to everyone's surprise, turned out to be less problematic on site than feared. The competition then revealed problems with this year's rule changes and motivated the team to bring forward the planned construction of its first self-designed robot, in order to have a working robot of about 80 cm ready by July, roughly 40 cm taller than the robots used so far.

The first part of the talk covers the impressions gathered in Iran: how the team members felt on site, and how much the prior information differed from reality.

Afterwards, the lessons learned from the competition are presented, along with the planning and construction of the new robot, codenamed GOAL.]]></itunes:summary>
      <itunes:duration>01:11:06</itunes:duration>
      <itunes:image href="https://lecture2go.uni-hamburg.de/images/12.345_HamburgBit-Bots_2014-06-10_15-24.jpg?lastmodified=1663761143419"/>
      <link>https://lecture2go.uni-hamburg.de/l2go/-/get/v/16448</link>
      <enclosure length="309855878" type="video/mp4" url="https://l2gdownload.rrz.uni-hamburg.de/abo/12.345_HamburgBit-Bots_2014-06-10_15-24.mp4"/>
      <pubDate>Tue, 10 Jun 2014 15:24:00 +0200</pubDate>
      <guid>https://lecture2go.uni-hamburg.de/l2go/-/get/v/16448</guid>
    </item>
    <item>
      <title>Logic for Design Science Research Theory Accumulation</title>
      <description><![CDATA[
The paper introduces a structured logic for the iterative and incremental accumulation of a design theory during a research project and across research programs. The logic is proposed to help researchers understand the links between parallel search spaces related to a particular design and their connection to the theoretical knowledge bases produced by previous search processes. The proposition rests on the notion that representing the structure and logic of design science research (DSR) theory using CIMO (context, intervention, mechanism, outcome) enables the elements of the knowledge base to be more easily evaluated, combined, and transferred between related search spaces. We view DSR theory development as an iterative and incremental social process and propose a structured logic as a means to better understand, but also guide, DSR theory development over time.]]></description>
      <itunes:summary><![CDATA[
The paper introduces a structured logic for the iterative and incremental accumulation of a design theory during a research project and across research programs. The logic is proposed to help researchers understand the links between parallel search spaces related to a particular design and their connection to the theoretical knowledge bases produced by previous search processes. The proposition rests on the notion that representing the structure and logic of design science research (DSR) theory using CIMO (context, intervention, mechanism, outcome) enables the elements of the knowledge base to be more easily evaluated, combined, and transferred between related search spaces. We view DSR theory development as an iterative and incremental social process and propose a structured logic as a means to better understand, but also guide, DSR theory development over time.]]></itunes:summary>
      <itunes:duration>01:02:20</itunes:duration>
      <itunes:image href="https://lecture2go.uni-hamburg.de/images/22.222_TuureTuunanen_2014-03-17_16-09.jpg?lastmodified=1663761192430"/>
      <link>https://lecture2go.uni-hamburg.de/l2go/-/get/v/16070</link>
      <enclosure length="268220475" type="video/mp4" url="https://l2gdownload.rrz.uni-hamburg.de/abo/22.222_TuureTuunanen_2014-03-17_16-09.mp4"/>
      <pubDate>Mon, 17 Mar 2014 16:09:00 +0100</pubDate>
      <guid>https://lecture2go.uni-hamburg.de/l2go/-/get/v/16070</guid>
    </item>
  </channel>
</rss>
