
Audio and Acoustic Signal Processing (AASP)

PhD Studentships in AI for Sound

The AI for Sound project (https://ai4s.surrey.ac.uk/) in the Centre for Vision, Speech and Signal Processing (CVSSP) at the University of Surrey is offering the following PhD studentships in AI for Sound, available from 1 October 2021:

(1) Automatic sound labelling for broadcast audio
(2) Information theoretic learning for sound analysis (UK applicants only)

Application Deadline: 1 August 2021

CVSSP also has a number of ongoing PhD studentship opportunities for outstanding candidates in all aspects of audio-visual signal processing, computer vision and machine learning, including research related to machine learning and audio signal processing. We also welcome enquiries from self-funded and part-funded candidates.

For informal enquiries about opportunities related to AI for Sound, please contact Prof Mark Plumbley (m.plumbley@surrey.ac.uk). Further information on how to apply is given below.

-----

** PhD studentship opportunities in the AI for Sound project **

(1) Automatic sound labelling for broadcast audio

The aim of this project is to develop new methods for automatic labelling of sound environments and events in broadcast audio, assisting production staff to find and search through content, and helping the general public access archive content. The project will undertake a combination of interviews and user profiling, analysis of audio search datasets, and categorisation by audio experts to determine the most useful terminology for production staff and the general public as user groups.

The project will develop a taxonomy of labels, and examine the similarities and differences between the user groups. It will also investigate the application of a labelled library in a production environment, examining workflows with common broadcast tools, then integrating and evaluating prototype systems. The project will further investigate methods for automatic subtitling of non-speech sounds, such as end-to-end encoder-decoder models with alignment, to directly map the acoustic signal to text sequences.

Working with BBC R&D, the student will develop software tools to demonstrate the results, especially for broadcasting and the management of audiovisual archive data, and will benchmark the results against human-assigned tags and descriptions of audio content. Using archive data provided by BBC R&D, the student will engage with audio production and research experts through Expert Panels, and with potential end users through Focus Groups. As part of this PhD, you will have the opportunity for close day-to-day collaboration with the BBC as a member of the R&D Audio Team.

Application Deadline: 1 August 2021

More information and how to apply: https://www.surrey.ac.uk/fees-and-funding/studentships/automatic-sound-labelling-broadcast-audio

(2) Information theoretic learning for sound analysis (Funding Eligibility: UK applicants only)

The aim of this PhD project is to investigate information theoretic methods for the analysis of sounds. The Information Bottleneck (IB) method has emerged as an interesting approach to investigating learning in deep neural networks and autoencoders. This project will investigate information-theoretic approaches to analysing sound sequences, both for supervised learning methods such as convolutional and recurrent networks, and for unsupervised methods such as variational autoencoders. The project will also investigate direct information loss estimators, and new information-theoretic processing structures for sound processing, for example involving both feed-forward and feedback connections inspired by the transfer of information in biological neural networks.
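
As a rough illustration of the Information Bottleneck style of analysis mentioned above, the following minimal sketch (synthetic data only; the binning estimator and all variable names are illustrative assumptions, not part of the project) estimates the mutual information I(T;Y) between a discretized hidden-layer activation T and a class label Y:

```python
# Minimal sketch: binning-based mutual information estimate of the kind used
# in Information Bottleneck analyses of deep networks. Synthetic data only.
import numpy as np

def discrete_mi(a, b):
    """I(A;B) in bits for two 1-D integer-coded sequences."""
    joint = np.zeros((a.max() + 1, b.max() + 1))
    for ai, bi in zip(a, b):
        joint[ai, bi] += 1
    joint /= joint.sum()
    pa = joint.sum(axis=1, keepdims=True)   # marginal P(A)
    pb = joint.sum(axis=0, keepdims=True)   # marginal P(B)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (pa @ pb)[nz])).sum())

rng = np.random.default_rng(0)
y = rng.integers(0, 10, size=5000)          # class labels (e.g. sound event classes)
t = y + rng.normal(0.0, 2.0, size=5000)     # noisy scalar "hidden activation"
t_bins = np.digitize(t, np.linspace(t.min(), t.max(), 30))  # discretize T
print(f"I(T;Y) approx. {discrete_mi(t_bins, y):.2f} bits")
```

In an IB analysis one would track such estimates of I(X;T) and I(T;Y) layer by layer over training; here a single noisy activation stands in for a network layer.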

Application Deadline: 1 August 2021

More information and how to apply: https://www.surrey.ac.uk/fees-and-funding/studentships/information-theoretic-learning-sound-analysis

** Other PhD studentships in the Centre for Vision, Speech and Signal Processing (CVSSP) **

CVSSP also has a number of PhD studentship opportunities for outstanding candidates, including for research related to machine learning and audio signal processing. For more information see https://www.surrey.ac.uk/centre-vision-speech-signal-processing/postgraduate-research-study and scroll to "PhD studentship opportunities at CVSSP".

For informal enquiries on opportunities related to AI for Sound, please contact Prof Mark Plumbley (m.plumbley@surrey.ac.uk).


3 year Postdoc Position (E14) "Machine Learning for Speech and Audio Processing" at Universität Hamburg, Germany

The Signal Processing (SP) research group at the Universität Hamburg in Germany is hiring a Postdoc (E13/E14) "Machine Learning for Speech and Audio Processing".

The general focus of the Signal Processing (SP) research group is on developing novel signal processing and machine learning methods for speech and multimodal signals. Applications include speech communication devices such as hearing aids and voice-controlled assistants. The research associate will conduct research on novel signal processing and machine learning methods applied to speech and/or multimodal signals. Furthermore, the research associate will help establish degree programs in the data science context.

Please find the full job announcement with all details here:
https://www.inf.uni-hamburg.de/en/inst/ab/sp/job-offer.html

YouTube demos of our group can be found here: https://www.youtube.com/channel/UCsC4bz4A6mdkktO_eyCDraw


Lecturer/Associate Professor

We invite applications for the post of Lecturer or Associate Professor in the Signal Processing, Audio and Hearing research group at the Institute of Sound and Vibration Research (ISVR), University of Southampton. We are open to applicants from a broad range of disciplines within the fields of Acoustic, Audio and Speech Signal Processing.

https://jobs.soton.ac.uk/Vacancy.aspx?ref=1350921DA-R

The ISVR has been at the forefront of sound and vibration research and education since its inception nearly 60 years ago. We have built an international reputation through pioneering academic research, strong and enduring collaborations with industry, and generations of graduates who have spread throughout the acoustics profession. We have excellent experimental facilities, including anechoic and reverberation chambers, a 6-axis motion simulator and multiple audio-focused laboratories. In addition, we are currently making substantial investments to refurbish our research facilities and teaching laboratories for the next generation, and you will be expected to help guide these critical developments.

You will have a PhD or equivalent qualifications and experience in engineering, computer science or a related discipline. A proven track record of publishing high-quality journal papers is essential. Ideally, you will have already demonstrated the ability to secure external research funding, and this will be a requirement for appointment at Associate Professor level. In any case, you will have clearly devised ideas for ambitious future research programmes that, coupled with your track record and drive, will enable you to secure external funding with the ultimate objective of building a research team in your area. A willingness and versatility to branch out into new areas and conduct interdisciplinary research are strongly encouraged.

You will demonstrate a collaborative ethos and have a positive approach towards both colleagues and students, mirroring the supportive and collegiate environment of the ISVR and the School of Engineering. You will also be encouraged to seek opportunities to collaborate across the University, and with other world-leading institutions.

You will have excellent communication skills and be enthusiastic about embracing innovative teaching practices to enhance students’ experience and foster independent learning. You may already have academic teaching qualifications, but will otherwise be expected to complete the Postgraduate Certificate in Academic Practice. Working alongside colleagues in the ISVR and the School of Engineering, you will contribute to large taught modules across a range of engineering-related subjects, but may also act as module lead, taking responsibility for curriculum design, module-level assessment and quality assurance. You will also be expected to lead and contribute to more specialist modules. Student projects form a key part of our engineering programmes, and you will be expected to propose and supervise innovative and exciting projects that provide excellent learning opportunities for our students.


JOB: Research Fellow in Machine Learning for Sound

Location: University of Surrey, Guildford, UK
Closing Date: Wednesday 16 June 2021 (23:00 GMT)

Applications are invited for a 3-year Research Fellow in Machine Learning for Sound, to work full-time on the EPSRC-funded Fellowship project "AI for Sound" (https://ai4s.surrey.ac.uk/), to start on 1 July 2021 or as soon as possible thereafter. We would particularly like to encourage applications from women, disabled and Black, Asian & Minority Ethnic candidates, since these groups are currently underrepresented in our area.

The aim of the project is to undertake research in computational analysis of everyday sounds, in the context of a set of real-world use cases in assisted living in the home, smart buildings, smart cities, and the creative sector. The postholder will be responsible for the core machine learning parts of the project, investigating advanced machine learning methods applied to sound signals. The postholder will be based in the Centre for Vision, Speech and Signal Processing (CVSSP) and work under the direction of PI (EPSRC Fellow) Prof Mark Plumbley. The successful applicant is expected to have a PhD (gained or near completion) in electronic engineering, computer science or a related subject; and research experience in machine learning and audio signal processing. Research experience in one or more of the following is desirable: deep learning; model compression; differential privacy; active learning; audio feature extraction; and publication of research software and/or datasets.

CVSSP is an International Centre of Excellence for research in Audio-Visual Machine Perception, with 170 researchers, a grant portfolio of £30M (£21M EPSRC) from EPSRC, EU, InnovateUK, charity and industry, and a turnover of £7M/annum. The Centre has state-of-the-art acoustic capture and analysis facilities and a Visual Media Lab with video and audio capture facilities supporting research in real-time video and audio processing and visualisation. CVSSP has a compute facility with over 200 GPUs for deep learning and >1PB of high-speed secure storage.

For more information about the post and how to apply, please visit: https://jobs.surrey.ac.uk/026021

Deadline: Wednesday 16 June 2021 (23:00 GMT)

For informal inquiries about the position, please contact Prof Mark Plumbley (m.plumbley@surrey.ac.uk).


Postdoctoral fellow - Sound representation in complex environments

A postdoctoral research position is available at Johns Hopkins University in the laboratory of Dr. Mounya Elhilali to investigate the representation of complex sounds in both biological and artificial networks. The position is available immediately for two years, with the possibility of renewal.

The ideal applicant will have a doctoral degree in computer science, electrical engineering, applied mathematics, neuroscience, psychology, hearing or brain sciences, with strong quantitative skills.

Johns Hopkins is an outstanding intellectual environment for medical and engineering research. The laboratory is affiliated with the Department of Electrical and Computer Engineering as well as the Center for Speech & Language Processing and the Center for Hearing and Balance, and has strong research collaborations with the departments of Biomedical Engineering, Psychological and Brain Sciences, Computer Science, and Mechanical Engineering, as well as the Schools of Medicine and Public Health.

Interested applicants should send a brief cover letter, a curriculum vitae with sample publications, and two reference contacts to mounya(at)jhu(dot)edu.


Speech Research & Development Engineer

Digital Voice Systems, Inc. (DVSI) is seeking a qualified Speech Research & Development Engineer at our office in Westford, MA. This is a great opportunity to join our team of world-class engineers in designing high-quality voice compression technology that is implemented in hundreds of millions of telecommunication systems worldwide.

The ideal candidate will play a key role in the research and development of DVSI's next generation of digital speech compression technology, including speech analysis, speech modeling, model parameter estimation, quantization, speech synthesis, error correction and mitigation methods, as well as echo cancellation and noise reduction.

Desired Qualifications

•  Research and development experience in speech or audio

•  Knowledge of programming languages such as C/C++ and MATLAB

•  PhD (or equivalent) in Electrical Engineering or Software Engineering with an emphasis in Signal Processing

•  U.S. Citizenship or Permanent Residency required

Compensation

•  Competitive salary

•  Benefits package

•  Excellent working environment

Company Background

Founded in 1988, Digital Voice Systems, Inc. (DVSI) is the world leader in the development of low data rate, high-quality speech compression products for use in digital mobile radio, satellite and other wireless communication systems. DVSI’s patented line of Multi-Band Excitation vocoders has been successfully implemented in a full range of private and standards-based digital communication systems worldwide.


Researchers in Speech, Text and Multimodal Machine Translation @ DFKI Saarbrücken, Germany


The MT group in the MLT Lab at DFKI Saarbrücken is looking for

    senior researchers/researchers/junior researchers

in speech, text and multimodal machine translation using deep learning.

3-year contracts, with the possibility of extension. Ideal starting dates are around June/July 2021.

Key responsibilities:
- Research and development in speech, text and multimodal MT
- Scientific publications
- Co-supervision of BSc/MSc students and research assistants
- Possibility of teaching at Saarland University (UdS)
- Senior: PhD co-supervision
- Senior: Project/grant acquisition and management

Qualifications senior researchers/researchers:
- PhD in NLP/Speech/MT/ML/CS or related
- strong scientific and publication track record in speech/text/multimodal-NLP/MT

Qualifications junior researchers:
- MSc in CS/NLP/Speech/ML/MT or related (possibility to do a PhD at DFKI/UdS)

All:
- Strong background in machine learning and deep learning
- Strong problem solving and programming skills
- Strong communication skills in written and spoken English (German an asset, but not a requirement)

Working environment: the posts are in the “Multilinguality and Language Technology” (MLT) Lab at DFKI (the German Research Center for Artificial Intelligence, https://www.dfki.de/en/web/) in Saarbrücken, Germany. MLT is led by Prof. Josef van Genabith. MLT is a highly international team and does both basic and applied research.

Application: a short cover letter indicating which level (senior / researcher / junior) you are applying for, a CV, a brief summary of research interests, and contact information for three references. Please submit your application as a PDF by Friday, 23 April 2021 to Prof. Josef van Genabith (josef.van_genabith@dfki.de), indicating your earliest possible start date. Positions remain open until filled.

Selected MT at MLT group publications 2020/21:
- Xu et al. Probing Word Translation in the Transformer and Trading Decoder for Encoder Layers. NAACL-HLT 2021.
- Chowdhury et al. Understanding Translationese in Multi-View Embedding Spaces. COLING 2020.
- Pal et al. The Transference Architecture for Automatic Post-Editing. COLING 2020.
- Ruiter et al. Self-Induced Curriculum Learning in Self-Supervised Neural Machine Translation. EMNLP 2020.
- Zhang et al. Translation Quality Estimation by Jointly Learning to Score and Rank. EMNLP 2020.
- Xu et al. Dynamically Adjusting Transformer Batch Size by Monitoring Gradient Direction Change. ACL 2020.
- Xu et al. Learning Source Phrase Representations for Neural Machine Translation. ACL 2020.
- Xu et al. Lipschitz Constrained Parameter Initialization for Deep Transformers. ACL 2020.
- Herbig et al. MMPE: A Multi-Modal Interface for Post-Editing Machine Translation. ACL 2020.
- Herbig et al. MMPE: A Multi-Modal Interface using Handwriting, Touch Reordering and Speech Commands for Post-Editing Machine Translation. ACL 2020.
- Alabi et al. Massive vs. Curated Embeddings for Low-Resourced Languages: the Case of Yorùbá and Twi. LREC 2020.
- Costa-jussà et al. Multilingual and Interlingual Semantic Representations for Natural Language Processing: A Brief Introduction. Computational Linguistics (CL), Special Issue on Multilingual and Interlingual Semantic Representations for Natural Language Processing.
- Xu et al. Efficient Context-Aware Neural Machine Translation with Layer-Wise Weighting and Input-Aware Gating. IJCAI 2020.

DFKI is one of the leading AI centers worldwide, with several sites in Germany. DFKI Saarbrücken is part of the Saarland University (UdS) Informatics Campus. UdS has exceptionally strong CS and CL schools and, in addition to DFKI, a Max Planck Institute for Informatics, a Max Planck Institute for Software Systems, the Center for Bioinformatics, and the CISPA Helmholtz Center for Information Security.

Geographic environment: Saarbrücken (http://www.saarbruecken.de/en) is the capital of Saarland, one of the Federal States in Germany, located right in the heart of Europe and a cultural center in the border region of Germany, France and Luxembourg. Frankfurt and Paris are less than 2 hours away by train. The cost of living is moderate in comparison with other cities in Germany and Europe.



PhD Position

************ PhD position at Inria (Nancy - Grand Est), France **************

(More information: https://jobs.inria.fr/public/classic/en/offres/2021-03399)

Title: Robust and Generalizable Deep Learning-based Audio-visual Speech Enhancement

The PhD thesis will be jointly supervised by Mostafa Sadeghi (Inria Starting Faculty Position) and Romain Serizel (Associate Professor, Université de Lorraine) in the MULTISPEECH Team at Inria, Nancy - Grand Est, France.

Contacts: Mostafa Sadeghi (mostafa.sadeghi@inria.fr) and Romain Serizel (romain.serizel@loria.fr)

Context: Audio-visual speech enhancement (AVSE) refers to the task of improving the intelligibility and quality of noisy speech by utilizing the complementary information of the visual modality (the speaker's lip movements) [1]. The visual modality can help distinguish target speech from background sounds, especially in highly noisy environments. Recently, owing to the great success and progress of deep neural network (DNN) architectures, AVSE has been extensively revisited. Existing DNN-based AVSE methods are categorized into supervised and unsupervised approaches. In the former category, a DNN is trained to map noisy speech and the associated video frames of the speaker into a clean estimate of the target speech. The unsupervised methods [2] follow a traditional maximum-likelihood approach combined with the expressive power of DNNs. Specifically, the prior distribution of clean speech is learned using deep generative models such as variational autoencoders (VAEs) and combined with a likelihood function based on, e.g., non-negative matrix factorization (NMF), to estimate the clean speech in a probabilistic way. As there is no training on noisy speech, this approach is unsupervised.
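
As a toy illustration of this unsupervised formulation (an assumed-notation sketch with synthetic data, not the exact algorithm of [2]), the following code combines a fixed clean-speech variance, standing in for a pretrained VAE decoder output, with an NMF noise variance model fitted by multiplicative updates, and recovers the speech with a Wiener-type filter:

```python
# Toy VAE+NMF-style enhancement in the STFT domain (synthetic data).
# Real methods iteratively update the speech variance via the VAE; here it is fixed.
import numpy as np

rng = np.random.default_rng(0)
F, T, K = 257, 100, 8                                  # freq bins, frames, NMF rank

X = rng.normal(size=(F, T)) + 1j * rng.normal(size=(F, T))  # noisy STFT (toy)
sigma_s2 = rng.gamma(2.0, 0.5, size=(F, T))            # speech variance (VAE stand-in)
W = np.abs(rng.normal(size=(F, K)))                    # NMF noise spectral patterns
H = np.abs(rng.normal(size=(K, T)))                    # NMF noise activations
P = np.abs(X) ** 2                                     # noisy power spectrogram

for _ in range(50):                                    # multiplicative NMF updates
    V = sigma_s2 + W @ H                               # modelled mixture variance
    W *= ((P / V**2) @ H.T) / ((1.0 / V) @ H.T)
    V = sigma_s2 + W @ H
    H *= (W.T @ (P / V**2)) / (W.T @ (1.0 / V))

sigma_n2 = W @ H
S_hat = (sigma_s2 / (sigma_s2 + sigma_n2)) * X         # posterior-mean (Wiener) estimate
```

The Wiener-type gain in the last line is where the learned speech prior and the noise model meet: each time-frequency bin is scaled by the estimated proportion of speech variance in the mixture.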

Supervised methods require deep networks, with millions of parameters, as well as a large audio-visual dataset with diverse enough noise instances to be robust against acoustic noise. There is also no systematic way to achieve robustness to visual noise, e.g., head movements, face occlusions, changing illumination conditions, etc. Unsupervised methods, on the other hand, show a better generalization performance and can achieve robustness to visual noise thanks to their probabilistic nature [3]. Nevertheless, their test phase involves a computationally demanding iterative process, hindering their practical use.

Objectives: In this PhD project, we are going to bridge the gap between supervised and unsupervised approaches, benefiting from both worlds. The central task of this project is to design and implement a unified AVSE framework having the following features: (1) robustness to visual noise, (2) good generalization to unseen noise environments, and (3) computational efficiency at test time. To achieve the first objective, various techniques will be investigated, including probabilistic switching (gating) mechanisms [3], face frontalization [4], and data augmentation [5]. The main idea is to adaptively lower bound the performance by that of audio-only speech enhancement when the visual modality is not reliable. To accomplish the second objective, we will explore techniques such as acoustic scene classification combined with noise modeling inspired by unsupervised AVSE, in order to adaptively switch to different noise models during speech enhancement. Finally, concerning the third objective, lightweight inference methods, as well as efficient generative models, will be developed. We will work with the AVSpeech [6] and TCD-TIMIT [7] audio-visual speech corpora.

References:

[1] D. Michelsanti, Z. H. Tan, S. X. Zhang, Y. Xu, M. Yu, D. Yu, and J. Jensen, “An overview of deep-learning based audio-visual speech enhancement and separation,” arXiv:2008.09586, 2020.

[2] M. Sadeghi, S. Leglaive, X. Alameda-Pineda, L. Girin, and R. Horaud, “Audio-visual speech enhancement using conditional variational auto-encoders,” IEEE/ACM Transactions on Audio, Speech and Language Processing, vol. 28, pp. 1788–1800, 2020.

[3] M. Sadeghi and X. Alameda-Pineda, “Switching variational autoencoders for noise-agnostic audio-visual speech enhancement,” in ICASSP, 2021.

[4] Z. Kang, M. Sadeghi, and R. Horaud, “Face frontalization based on robustly fitting a deformable shape model to 3D landmarks,” arXiv:2010.13676, 2020.

[5] S. Cheng, P. Ma, G. Tzimiropoulos, S. Petridis, A. Bulat, J. Shen, and M. Pantic, “Towards pose-invariant lip reading,” in ICASSP, 2020.

[6] A. Ephrat, I. Mosseri, O. Lang, T. Dekel, K. Wilson, A. Hassidim, W. T. Freeman, and M. Rubinstein, “Looking to listen at the cocktail party: A speaker-independent audio-visual model for speech separation,” SIGGRAPH, 2018.

[7] N. Harte and E. Gillen, “TCD-TIMIT: An audio-visual corpus of continuous speech,” IEEE Transactions on Multimedia, vol. 17, no. 5, pp. 603–615, May 2015.

Skills:

  • Master's degree, or equivalent, in the field of speech/audio processing, computer vision, machine learning, or in a related field,
  • Ability to work independently as well as in a team,
  • Solid programming skills (Python, PyTorch),
  • A decent level of written and spoken English.

Benefits package:

  • Subsidized meals
  • Partial reimbursement of public transport costs
  • Leave: 7 weeks of annual leave + 10 extra days off due to RTT (statutory reduction in working hours) + possibility of exceptional leave (sick children, moving home, etc.)
  • Possibility of teleworking (after 6 months of employment) and flexible organization of working hours
  • Professional equipment available (videoconferencing, loan of computer equipment, etc.)
  • Social, cultural, and sports events and activities
  • Access to vocational training
  • Social security coverage

Remuneration:

Salary: €1,982 gross/month for the 1st and 2nd year; €2,085 gross/month for the 3rd year.

Monthly salary after taxes: around €1,596.05 for the 1st and 2nd year, and €1,678.99 for the 3rd year (medical insurance included).



Marie Skłodowska-Curie Early Stage Researcher (PhD position)

The Signal Processing Group (https://uol.de/en/mediphysics-acoustics/sigproc) of the Department of Medical Physics and Acoustics at the University of Oldenburg, Germany, is seeking to fill the position of a Marie Skłodowska-Curie Early Stage Researcher (m/f/d) in acoustical signal processing, within the framework of the H2020 MSCA European Training Network SOUNDS.

The full-time position is available from 01.07.2021 for 3 years, with a salary according to TV-L E13. The position is suitable for part-time employment for personal or family reasons.

The Early Stage Researcher (PhD student) will be embedded in the SOUNDS research and training network, and will carry out applied research in the interdisciplinary field of signal processing, room acoustics, auditory perception, communication networks and machine learning. The research will be executed in an international team of audio signal processing researchers and will involve several visits to internationally renowned research labs in Europe.

The SOUNDS European Training Network (ETN) revolves around a new and promising paradigm coined as Service-Oriented, Ubiquitous, Network-Driven Sound. Inspired by the ubiquity of mobile and wearable devices capable of capturing, processing, and reproducing sound, the SOUNDS ETN aims to bring audio technology to a new level by exploiting network-enabled cooperation between devices. We envision the next generation of audio devices to be capable of providing enhanced hearing assistance, creating immersive audio experience, enabling advanced voice control and much more, by seamlessly exchanging signals and parameter settings, and spatially analyzing and reproducing sound jointly with other nearby audio devices and infrastructure. It is anticipated that this paradigm will eventually result in an entirely new way of designing and using audio technology by considering audio as a service, enabled through shared infrastructure.

In the envisaged PhD project, the main objective is to develop and evaluate signal processing algorithms for speech enhancement using acoustic sensor networks. Acoustic sensor networks consist of several spatially distributed microphones (e.g., smart speakers, headsets with external microphones) and provide a substantial advantage over traditional microphone arrays, since the probability that a subset of the microphones is close to the desired speech source(s) is greatly increased. More specifically, the PhD project will focus on combining model-based and machine-learning-based approaches for joint dereverberation and noise reduction, aiming at improving speech quality and intelligibility.
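
As a toy illustration of the advantage described above (assumed geometry and synthetic signals; not the project's intended algorithm), the sketch below simulates three scattered microphones and applies simple delay-and-sum beamforming: compensating each microphone's propagation delay and averaging adds the target coherently while the sensor noise adds incoherently.

```python
# Toy delay-and-sum beamforming over an ad-hoc microphone network.
import numpy as np

fs, c = 16000, 343.0
mics = np.array([[0.0, 0.0], [2.5, 0.0], [1.0, 3.0]])   # scattered mic positions (m)
src = np.array([1.2, 1.5])                               # desired source position (m)

rng = np.random.default_rng(0)
s = rng.normal(size=fs)                                  # 1 s toy source signal
delays = np.linalg.norm(mics - src, axis=1) / c          # propagation delays (s)

def frac_delay(x, d, fs):
    """Delay signal x by d seconds via an FFT-domain phase shift."""
    n = len(x)
    freqs = np.fft.rfftfreq(n, 1 / fs)
    return np.fft.irfft(np.fft.rfft(x) * np.exp(-2j * np.pi * freqs * d), n)

# simulate captures: delayed source plus independent sensor noise
captures = [frac_delay(s, d, fs) + 0.5 * rng.normal(size=fs) for d in delays]

# delay-and-sum: undo each delay, then average the aligned channels
aligned = [frac_delay(x, -d, fs) for x, d in zip(captures, delays)]
enhanced = np.mean(aligned, axis=0)

snr = lambda est: 10 * np.log10(np.sum(s**2) / np.sum((est - s)**2))
print(f"single mic SNR: {snr(aligned[0]):.1f} dB, beamformed: {snr(enhanced):.1f} dB")
```

In practice the delays are unknown and late reverberation matters, which is where the model-based and machine-learning-based methods targeted by the project come in.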

Responsibilities of the Early Stage Researcher

- carry out applied research on acoustical signal processing, involving algorithm design, implementation, and experimental validation;

- monitor the work plan of his/her individual research project and make sure that milestones are achieved and deliverables are finalized in a timely manner;

- actively participate in research meetings with the other SOUNDS ETN researchers;

- take part in the research meetings and seminars at the Department of Medical Physics and Acoustics;

- enroll in a doctoral training program at the Graduate School Science, Medicine and Technology of the University of Oldenburg.

Profile

- Candidates are required to have an academic university degree (Master or equivalent) in electrical engineering, computer science, engineering acoustics or a related discipline, excellent grades and a solid scientific background in at least two of the following fields: speech and audio signal processing, acoustics, machine learning. Candidates who are in the final phase of their Master studies are equally encouraged to apply, and should mention their expected graduation date.

- Candidates must satisfy the eligibility conditions for MSCA Early Stage Researchers, i.e., they must have obtained their Master degree in the past 4 years and must not have resided or carried out their main activity (work, studies, etc.) in Germany for more than 12 months in the past 3 years. Applications of candidates not fulfilling these eligibility conditions will not be considered.

- Familiarity with scientific tools and programming languages (e.g., MATLAB, Python) as well as excellent English language skills (both oral and written) are required.

- Experience with speech enhancement algorithms and machine-learning-based methods for audio processing is beneficial.

Offer

- A prestigious three-year MSCA Fellowship with a competitive salary.

- A strong involvement in a European research project with high international visibility.

- A high-level and exciting international research environment.

- A thorough scientific education in the frame of a doctoral training program.

- The possibility to participate in local as well as international courses, workshops and conferences.

- The possibility to perform research visits to internationally renowned research labs in Europe.

The Carl von Ossietzky Universität Oldenburg is dedicated to increasing the percentage of women in science. Therefore, equally qualified female candidates will be given preference. Applicants with disabilities will be preferentially considered in case of equal qualification.

To apply for this position please send your application (ref. SP211), including a letter of motivation with a statement of skills and research interests (max. 1 page), curriculum vitae, and a copy of the university diplomas and transcripts, to Carl von Ossietzky Universität Oldenburg, Fakultät VI, Abt. Signalverarbeitung, Prof. Dr. Simon Doclo, 26111 Oldenburg, Germany, or electronically to simon.doclo@uni-oldenburg.de. Application by email is preferred. The application deadline is 15.03.2021.

The SOUNDS ETN strongly values research integrity, actively supports open access and reproducible research, and strives for diversity and gender balance in its entire research and training program. The SOUNDS ETN adheres to The European Charter for Researchers and The Code of Conduct for the Recruitment of Researchers.

Privacy notice:

Please be informed that, for recruitment, selection and audit purposes, we will process personal data collected from you in response to this vacancy, such as your name, photo, address and email address, as well as personal data contained in your curriculum vitae, recommendation letters or other documents you submit.

You have the following rights in relation to our processing of your personal data:

  • Right of access: you can request access to your personal data, i.e. the right to get an overview of your personal data that we process.
  • Right to rectification: you can request correction of inaccurate data or completion of incomplete data.
  • Right to erasure: you have the right to ask us to erase your personal data in certain circumstances.
  • Right to restriction of processing: you have the right to ask us to restrict the processing of your personal data in certain circumstances.
  • Right to object: you have the right to object to the processing of your personal data in certain circumstances.

In case you object to the processing of your data necessary for recruitment and selection, please be aware that no contractual relation is possible and we will not be able to consider your application.

  • Right to data portability: you have the right to ask that we transfer the personal data you provided us to another organization, or to you, in certain circumstances.
