Distributed and Self-organizing Systems
Seminar Web Engineering (WS 2022/2023)



Welcome to the homepage of the Seminar Web Engineering

This website contains all important information about the seminar, including links to available topics as well as information about the seminar process in general.

The interdisciplinary research area Web Engineering develops approaches for the methodological construction of Web-based applications and distributed systems as well as for their continuous development (evolution). For instance, Web Engineering deals with the development of interoperable Web Services, the implementation of web portals using service-oriented architectures (SOA), fully accessible user interfaces, and even exotic web-based applications that are voice-controlled via the telephone or rendered on TV and radio.

The following steps are necessary to complete the seminar:

  • Preparation of a presentation about the topic assigned to you.
  • An additional written report on your topic.
  • Each report is reviewed by two or three other participants.

Seminar chairs

  • traubinger
  • siegert
  • gaedke


Contact

If you have any questions concerning this seminar or the exam as a participant, please contact us via OPAL.

We also offer a feedback system where you can provide anonymous feedback for a particular session to the presenter on what you liked or where we can improve.

Participants

The seminar is offered for students of the following programmes (for pre-requisites, please refer to your study regulations):

If your programme is not listed here, please contact us prior to seminar registration and indicate your study programme, the version (year) of your study regulations (Prüfungsordnungsversion) and the module number (Modulnummer) to allow us to check whether we can offer the seminar for you and find an appropriate mapping.

Registration

You may only participate after registration in the Seminar Course in OPAL.

The registration opens on 14.10.2022 at 12:00 and ends on 21.10.2022 at 23:59. As the available slots are usually booked quickly, we recommend completing your registration soon after it opens.

Topics and Advisors

Questions:

  • How can you include Forgiveness and Regret in a Content Trust Model?
  • Why would these concepts enhance the Content Trust model?

Questions:

  • What are GOMS/KLM Models? How do they work? Why are they used? What is the (data) basis on which they were created?
  • For what kinds of user interfaces can they be or have they been applied? What are their limitations?
  • Apply GOMS modeling to real-world examples (e.g. the VSR website) and demonstrate how it can be used to improve these interfaces.
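The KLM part of this topic can be made concrete with a small calculator. The sketch below uses the classic operator times from Card, Moran & Newell (1983); the task encoding at the end is an invented example, and real analyses calibrate these constants for the target users and devices.

```python
# Minimal Keystroke-Level Model (KLM) sketch. Operator times (seconds)
# follow the classic values from Card, Moran & Newell (1983);
# real analyses calibrate these constants for the target users/devices.
OPERATOR_TIMES = {
    "K": 0.28,  # keystroke (average typist)
    "P": 1.10,  # point at a target with the mouse
    "B": 0.10,  # mouse button press or release
    "H": 0.40,  # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation
}

def klm_time(operators: str) -> float:
    """Predicted task time for a sequence of KLM operators, e.g. 'MPBB'."""
    return sum(OPERATOR_TIMES[op] for op in operators)

# Invented example task: point and click a menu entry (M P B B),
# then type a 5-letter word (M + 5K).
print(round(klm_time("MPBB" + "M" + "K" * 5), 2))  # 5.4 seconds
```

Tools like Cogulator or CogTool (linked in the literature list) automate exactly this kind of computation with richer operator sets.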

Literature:

  • https://cogulator.io/
  • https://syntagm.co.uk/design/klmcalc.shtml
  • https://www.cogtool.org/
  • Card, S. K., Moran, T. P., & Newell, A. (1983). The psychology of human-computer interaction. Hillsdale, N.J. : L. Erlbaum Associates.
  • Kieras, D. (1997). A guide to GOMS model usability evaluation using NGOMSL (Chapter 31). In M. Helander, T.K. Landauer & P.V. Prabhu (Eds.), Handbook of Human-Computer Interaction. Amsterdam: North-Holland Elsevier Science Publishers.
  • John, B. and Kieras, D. The GOMS family of user interface analysis techniques: comparison and contrast. ACM TOCHI, 3 (4). 1996. 320-351.
  • John, B. E. (2010). CogTool: Predictive human performance modeling by demonstration. 19th Annual Conference on Behavior Representation in Modeling and Simulation 2010, BRiMS 2010, 308–309.

Questions:

  • What are replication studies? Why are replication studies important? To what situation does the term "replication crisis" refer, and in which fields within computer science research has it been applied?
  • Find existing replication studies in Web Engineering and Software Engineering. What is replicated in them and how? Are there differences to replication studies in other fields (e.g. psychology, biology)?
  • Did the replication studies confirm the initial results? What were the problems?

Literature:

  • Cockburn, A., Dragicevic, P., Besançon, L., & Gutwin, C. (2020). Threats of a replication crisis in empirical computer science. Communications of the ACM, 63(8), 70–79. https://doi.org/10.1145/3360311
  • Echtler, F., & Häußler, M. (2018). Open Source, Open Science, and the Replication Crisis in HCI. Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems, 2018-April, 1–8. https://doi.org/10.1145/3170427.3188395
  • Shepperd, M. (2018). Replication studies considered harmful. Proceedings of the 40th International Conference on Software Engineering: New Ideas and Emerging Results, 73–76. https://doi.org/10.1145/3183399.3183423
  • Gómez, O. S., Juristo, N., & Vegas, S. (2014). Understanding replication of experiments in software engineering: A classification. Information and Software Technology, 56(8), 1033-1048.
  • Da Silva, F. Q., Suassuna, M., França, A. C. C., Grubb, A. M., Gouveia, T. B., Monteiro, C. V., & dos Santos, I. E. (2014). Replication of empirical studies in software engineering research: a systematic mapping study. Empirical Software Engineering, 19(3), 501-557.
  • Shepperd, M., Ajienka, N., & Counsell, S. (2018). The role and value of replication in empirical software engineering results. Information and Software Technology, 99, 120-132.

Questions:

  • How does a Systematic Literature Review work? Prepare a guideline for computer science students explaining the main aspects and include a list of relevant publication search engines/catalogues.
  • What does "systematic" mean in SLR, and how does it differ from other literature review methods? How does it compare to a Structured Literature Review? How does it compare to Systematic Mapping Studies? What are risks and limitations of the method?
  • How are research questions represented/quantified? What does coding mean in this context?
  • How are search queries constructed? Explain the technique of query expansion for generating additional queries.
  • Which SLR artifacts should be provided to allow for reproducibility and replicability?
  • What tools exist to support SLRs? Demonstrate a suitable tool.
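The query-construction step above can be sketched in a few lines. The concept/synonym lists below are invented examples; a real SLR derives them from pilot searches and domain thesauri before running them against digital libraries.

```python
from itertools import product

# Hypothetical synonym groups for two key concepts of a research question;
# a real SLR derives these from pilot searches and thesauri.
concepts = [
    ["web engineering", "web development"],
    ["systematic literature review", "systematic review", "SLR"],
]

def expand_queries(concept_synonyms):
    """Query expansion: one query per synonym combination, plus a single
    OR-combined boolean query as accepted by most digital libraries."""
    expanded = [" AND ".join(f'"{term}"' for term in combo)
                for combo in product(*concept_synonyms)]
    combined = " AND ".join(
        "(" + " OR ".join(f'"{term}"' for term in group) + ")"
        for group in concept_synonyms)
    return expanded, combined

queries, combined = expand_queries(concepts)
print(len(queries))  # 2 * 3 = 6 expanded queries
print(combined)
```

Documenting the generated queries (and per-database hit counts) is one of the artifacts that makes an SLR reproducible.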

Literature:

  • Kitchenham, B. (2004). Procedures for Undertaking Systematic Reviews. https://www.inf.ufsc.br/~aldo.vw/kitchenham.pdf
  • Kitchenham, B., Pearl Brereton, O., Budgen, D., Turner, M., Bailey, J., & Linkman, S. (2009). Systematic literature reviews in software engineering - A systematic literature review. Information and Software Technology, 51(1), 7–15.
  • Brereton, P., Kitchenham, B. a., Budgen, D., Turner, M., & Khalil, M. (2007). Lessons from applying the systematic literature review process within the software engineering domain. Journal of Systems and Software, 80(4), 571–583.
  • Petersen, K., Vakkalanka, S., & Kuzniarz, L. (2015). Guidelines for conducting systematic mapping studies in software engineering: An update. Information and Software Technology, 64, 1–18.
  • Díaz, O., Medina, H., & Anfurrutia, F. I. (2019). Coding-Data Portability in Systematic Literature Reviews. Proceedings of the Evaluation and Assessment on Software Engineering - EASE ’19, 178–187.
  • Khadka, R., Saeidi, A. M., Idu, A., Hage, J., & Jansen, S. (2013). Legacy to SOA Evolution: A Systematic Literature Review. In A. D. Ionita, M. Litoiu, & G. Lewis (Eds.), Migrating Legacy Applications: Challenges in Service Oriented Architecture and Cloud Computing Environments (pp. 40–71). IGI Global.
  • Jamshidi, P., Ahmad, A., & Pahl, C. (2013). Cloud Migration Research: A Systematic Review. IEEE Transactions on Cloud Computing, 1(2), 142–157.
  • Rai, R., Sahoo, G., & Mehfuz, S. (2015). Exploring the factors influencing the cloud computing adoption: a systematic study on cloud migration. SpringerPlus, 4(1), 197.
  • A. Hinderks, F. José, D. Mayo, J. Thomaschewski and M. J. Escalona, "An SLR-Tool: Search Process in Practice : A tool to conduct and manage Systematic Literature Review (SLR)," 2020 IEEE/ACM 42nd International Conference on Software Engineering: Companion Proceedings (ICSE-Companion), 2020, pp. 81-84.
  • PRISMA 2020 http://www.prisma-statement.org/

Questions:

  • What is Electron?
  • How does it work?
  • What are limitations?
  • Who uses Electron?

Literature:

Questions:

  • What are existing best practices/guidelines etc. for scientific writing, particularly for writing bachelor/master theses, and especially in the fields of web engineering, software engineering, HCI, and information systems?
  • How can texts be structured according to the "Pyramid Principle"? How to apply it to the argumentation structure of theses? How can it be combined with the SCQA scheme?
  • How does the CCC (Context-Content-Conclusion) Scheme work? What are the conflicts with the Pyramid Principle? How can they be combined in the same document?
  • What to consider when writing a thesis with regard to the use of tenses, wordings and consistency, abbreviations, subjunctive, wideness of claims, precision of claims, colloquial expressions, language complexity, and overall writing style?
  • For all best practices/guidelines/schemes/principles/advice etc., what is the current evidence base (e.g. experimental studies) supporting their effectiveness?
  • Demo Idea (can be discussed/modified): Prepare interactive self-learning materials (e.g. a quiz) for the best practices/guidelines/schemes/principles/advice etc. targeting bachelor/master students who are starting to write their theses.

Literature:

  • B. Minto, The Pyramid Principle: Logic in Writing and Thinking. London: Pitman, 1995.
  • B. Mensh and K. Kording, “Ten simple rules for structuring papers,” PLOS Comput. Biol., vol. 13, no. 9, p. e1005619, Sep. 2017, doi: 10.1371/journal.pcbi.1005619.

Questions:

  • What is empirical Software Engineering Evaluation and how can it be done?
  • Why is evaluation important in the research process?
  • What is the difference between a qualitative and quantitative evaluation? (When do you use which one? What are advantages and disadvantages?)
  • Prepare a list of evaluation methods and tools that can be used to evaluate software. Explain them and add relevant literature for these methods.
  • Demonstrate one quantitative and one qualitative method. For this, find a feasible research question, conduct a survey on it with each of the two methods, compute the results, and discuss and present them. You can choose a low-level topic on your own that is related to Web Engineering. Use statistical methods to compute the results.
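As a minimal building block for the quantitative part of such a demo, the following sketch computes Welch's t statistic with only the standard library. The two samples are invented task-completion times; a real evaluation would also derive a p-value (e.g. via scipy.stats) and report effect sizes.

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic and degrees of freedom for two independent
    samples; a minimal building block for a quantitative evaluation."""
    va, vb = variance(a) / len(a), variance(b) / len(b)
    t = (mean(a) - mean(b)) / math.sqrt(va + vb)
    df = (va + vb) ** 2 / (va ** 2 / (len(a) - 1) + vb ** 2 / (len(b) - 1))
    return t, df

# Invented task-completion times (seconds) for two UI variants
ui_a = [12.1, 10.4, 11.8, 13.0, 12.5]
ui_b = [14.2, 13.8, 15.1, 14.9, 13.5]
t, df = welch_t(ui_a, ui_b)
print(round(t, 2), round(df, 1))
```

Welch's variant is often preferred over Student's t-test because it does not assume equal variances between the two groups.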

Literature:

  • Own research
  • Creswell, J. W. (2014). Research design : Qualitative, quantitative, and mixed methods approaches (4. ed., in). SAGE. https://katalog.bibliothek.tu-chemnitz.de/Record/0008891954
  • Wohlin, C., Runeson, P., Höst, M., Ohlsson, M. C., Regnell, B., & Wesslén, A. (2012). Experimentation in Software Engineering. In Experimentation in Software Engineering (Vol. 9783642290). Springer Berlin Heidelberg. https://doi.org/10.1007/978-3-642-29044-2
  • Chatzigeorgiou, A., Chaikalis, T., Paschalidou, G., Vesyropoulos, N., Georgiadis, C. K., & Stiakakis, E. (2015). A Taxonomy of Evaluation Approaches in Software Engineering. Proceedings of the 7th Balkan Conference on Informatics Conference - BCI ’15, 1–8. https://doi.org/10.1145/2801081.2801084
  • Wainer, J., Novoa Barsottini, C. G., Lacerda, D., & Magalhães de Marco, L. R. (2009). Empirical evaluation in Computer Science research published by ACM. Information and Software Technology, 51(6), 1081–1085. https://doi.org/10.1016/j.infsof.2009.01.002

Questions:

  • What is Routing? What are differences between indoor and outdoor navigation?
  • Which information is important? Consider information from the building, but also from the surrounding area. (Use OpenStreetMap (OSM) tags as a data basis for this, including accessibility information.) Is there other information in or about the building that is relevant? Give an overview of all of this data.
  • How can it be stored so that the information can be accessed easily? What concepts and frameworks are used and how do they work? Give an overview of these and show examples.
  • Can these concepts and frameworks be optimized? How? Build one on your own and present it.
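A routing concept of the kind asked for above can be prototyped quickly. This sketch runs Dijkstra's algorithm over an invented toy graph whose edges carry OSM-style tags; an accessibility profile simply filters out "steps" edges. All node names and tags are illustrative, not real OSM data.

```python
import heapq

# Toy indoor graph: nodes are waypoints, edges carry a walking cost and
# OSM-style tags. All names and tags are illustrative, not real OSM data.
EDGES = {
    "entrance":     [("hall", 5, {"highway": "footway"}),
                     ("stairs_top", 3, {"highway": "steps"})],
    "hall":         [("elevator_top", 6, {"highway": "elevator"})],
    "stairs_top":   [("room_101", 2, {})],
    "elevator_top": [("room_101", 2, {})],
}

def route(start, goal, avoid_steps=False):
    """Dijkstra shortest path; avoid_steps models a wheelchair profile."""
    queue = [(0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nxt, weight, tags in EDGES.get(node, []):
            if avoid_steps and tags.get("highway") == "steps":
                continue
            heapq.heappush(queue, (cost + weight, nxt, path + [nxt]))
    return None

print(route("entrance", "room_101"))                    # via the stairs
print(route("entrance", "room_101", avoid_steps=True))  # via the elevator
```

The same edge-filtering idea generalizes to other user requirements (avoiding crowded areas, preferring landmarks, etc.).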

Literature:

  • Own research
  • https://www.openstreetmap.org
  • https://wiki.openstreetmap.org/wiki/Main_Page
  • R. Tscharn, T. Außenhofer, D. Reisler, and J. Hurtienne, "'Turn Left After the Heater': Landmark Navigation for Visually Impaired Users", p. 2, doi: 10.1145/2982142.2982195.
  • V. Traubinger, L. Franzkowiak, N. Tauchmann, M. Costantino, J. Richter, and M. Gaedke, "The Right Data at the Right Moment for the Right Person — User Requirements and Their Implications for the Design of Indoor Navigation Systems", in 2021 International Conference on Indoor Positioning and Indoor Navigation (IPIN), Lloret de Mar, Spain, Nov. 2021, pp. 1–8. doi: 10.1109/IPIN51156.2021.9662570.
  • N. Fallah, I. Apostolopoulos, K. Bekris, and E. Folmer, "Indoor Human Navigation Systems: A Survey", Interacting with Computers, vol. 25, no. 1, pp. 21–33, Jan. 2013, doi: 10.1093/iwc/iws010.
  • C. Bauer, M. Müller, and B. Ludwig, "Indoor pedestrian navigation systems: is more than one landmark needed for efficient self-localization?", in Proceedings of the 15th International Conference on Mobile and Ubiquitous Multimedia - MUM '16, Rovaniemi, Finland, 2016, pp. 75–79. doi: 10.1145/3012709.3012728.
  • A. Miyake, M. Hirao, M. Goto, C. Takayama, M. Watanabe, and H. Minami, "A Navigation Method for Visually Impaired People: Easy to Imagine the Structure of the Stairs", in The 22nd International ACM SIGACCESS Conference on Computers and Accessibility, Virtual Event, Greece, Oct. 2020, pp. 1–4. doi: 10.1145/3373625.3418002.
  • M. Gupta et al., "Towards More Universal Wayfinding Technologies: Navigation Preferences Across Disabilities", in Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, Apr. 2020, pp. 1–13. doi: 10.1145/3313831.3376581.
  • H. Nicolau, J. Jorge, and T. Guerreiro, "Blobby: how to guide a blind person", in Proceedings of the 27th international conference extended abstracts on Human factors in computing systems - CHI EA '09, Boston, MA, USA, 2009, p. 3601. doi: 10.1145/1520340.1520541.
  • K. L. Lovelace, M. Hegarty, and D. R. Montello, "Elements of Good Route Directions in Familiar and Unfamiliar Environments", in Spatial Information Theory. Cognitive and Computational Foundations of Geographic Information Science, vol. 1661, C. Freksa and D. M. Mark, Eds. Berlin, Heidelberg: Springer Berlin Heidelberg, 1999, pp. 65–82. doi: 10.1007/3-540-48384-5_5.

Questions:

  • What is design science research? What are the objectives of design science research?
  • What are its activities? How is research conducted? How are results evaluated?
  • In which research areas of computer science is this methodology most practical?
  • Using design science research, produce a viable and simplified artifact in the form of a construct, a model, or a method, and demonstrate the activities.

Literature:

  • Own research
  • Johannesson Perjons (2021), An Introduction to Design Science, https://link.springer.com/book/10.1007/978-3-030-78132-3

Questions:

  • What is signposting?
  • How can signposting be applied to improve machine-readability of resources on the Web?

Questions:

  • How is a scientific work, especially a thesis in computer science, structured? What sections should a thesis contain and what purpose do they have? Give an overview.
  • What is the importance of an introduction? What should it contain? How long should it be? How is it related to the other sections in a scientific work?
  • What makes a "good" motivation for your scientific work? Why is it important for the readers? What are methods to write it so the reader can relate to the writer?
  • What is the scope of a scientific work? Why is it important? How should you include the scope in the introduction?
  • What are current and well-known best practices/guidelines/schemes/principles/advice etc.? What evidence base (e.g. experimental studies) supports them? Present them.
  • Choose a suitable scientific work and work out a way to visually represent its whole structure. Show how the introduction, motivation and scope relate to the other parts.

Literature:

  • Own research
  • Peat, J., Elliott, E., Baur, L., & Keena, V. (2013). Scientific writing: easy when you know how. John Wiley & Sons. DOI:10.1002/9781118708019
  • Barbara Minto: The Pyramid Principle. Pearson Education, 2009.
  • Mensh, B., & Kording, K. (2017). Ten simple rules for structuring papers. PLoS computational biology, 13(9), e1005619. DOI: https://doi.org/10.1371/journal.pcbi.1005619
  • J. M. Setchell, “Writing a Scientific Report,” in Studying Primates: How to Design, Conduct and Report Primatological Research, Cambridge: Cambridge University Press, 2019, pp. 271–298.
  • Blackwell, J., & Martin, J. (2011). A scientific approach to scientific writing. Springer Science & Business Media.
  • Williams, J. M., & Bizup, J. (2014). Lessons in clarity and grace. Pearson.
  • Oguduvwe, J. I. P. (2013). Nature, Scope and Role of Research Proposal in Scientific Investigations. IOSR Journal Of Humanities And Social Science (IOSR-JHSS), 17(2), 83-87. https://www.iosrjournals.org/iosr-jhss/papers/Vol17-issue2/L01728387.pdf

Questions:

  • Which meta-studies exist on the topic? Identify suitable publications (SLRs, Systematic Mapping studies, Literature Reviews) relevant for AI in HCI Design and Evaluation.
  • What is the state of research on AI in HCI design and evaluation? Provide an overview about the use of AI methods for HCI design and evaluation in the last 5 years by performing a Systematic Mapping study.
  • Which approaches exist and how can they be grouped/classified? In which sources/years were approaches published and is there a trend over the past years? What kind of evidence is provided for the approaches? Which AI-techniques are used? What is the input? What is the output?

Questions:

  • What is the current state of voice user interface research? Identify relevant publication venues, classify/group existing published approaches and identify research directions for future research as well as tools and platforms that support the creation of voice user interfaces.
  • Which approaches specifically address the automatic assessment, evaluation and testing of voice user interfaces or voice interactions? Which methods do they use? What kinds of inputs do they require? Which results/predictions/assessments do they produce?
  • What are quality and performance metrics for voice user interfaces? Identify existing measurement strategies and metrics that allow the evaluation and comparison of voice user interfaces.

Questions:

  • Provide an introduction to PyTorch. What features are supported? Which models can be created and which state-of-the-art models are available? Which tools and libraries are commonly used with PyTorch? Provide specific recommendations for different usage scenarios.
  • How does PyTorch compare to TensorFlow (in 2022)? When to use which? Review existing structured comparisons and synthesize your own decision tree to support framework selection.
  • Demonstrate the use of PyTorch by training a suitable state-of-the-art regression model. Showcase different levels of abstraction (potentially applying suitable frameworks/tools in combination or on top of PyTorch) when defining your own model architecture. Compare the demo model with the corresponding implementation in TensorFlow.

Questions:

  • Provide an overview of Automatic Fake News Detection approaches. How are Fake News defined and which criteria exist for their classification (in general, not for automatic classifiers)? Which methods/models are commonly used? What are performance metrics to assess model quality and what are the current quality levels achieved? What are current research directions/trends for future research?
  • Which datasets are available? What are their sizes? How have they been created? What do they contain? Which classification schemes exist beyond the binary fake/non-fake classification?
  • Which approaches take additional data apart from the news text into account? How is data from other data sources integrated into the model inputs and/or the results?
  • Which languages are covered by current models and datasets? Which approaches address aspects of translation, transfer learning of models from other languages, language-independent meta-models etc.?
  • Demonstrate automatic fake news detection by training a state-of-the-art model on an existing dataset and applying it to current news items (not older than 4 weeks) that are not contained in the dataset.
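To illustrate the train-then-apply pipeline behind this demo (not the state-of-the-art models the task asks for), here is a deliberately tiny naive-Bayes bag-of-words sketch. All headlines are invented, and class priors are omitted because the toy training set is balanced.

```python
import math
from collections import Counter

# Invented toy headlines; a real demo would train a state-of-the-art model
# on a published dataset such as LIAR or FakeNewsNet.
TRAIN = [
    ("scientists confirm new exoplanet discovery", "real"),
    ("city council approves new budget plan", "real"),
    ("miracle cure doctors dont want you to know", "fake"),
    ("shocking secret celebrity scandal exposed", "fake"),
]

def fit(data):
    """Per-class word counts for a naive Bayes bag-of-words model."""
    counts = {"real": Counter(), "fake": Counter()}
    for text, label in data:
        counts[label].update(text.split())
    return counts

def predict(counts, text):
    """Most likely class under add-one smoothing (equal class priors,
    since the toy training set is balanced)."""
    vocab = set(counts["real"]) | set(counts["fake"])
    scores = {}
    for label, c in counts.items():
        total = sum(c.values()) + len(vocab)
        scores[label] = sum(math.log((c[w] + 1) / total)
                            for w in text.split())
    return max(scores, key=scores.get)

model = fit(TRAIN)
print(predict(model, "shocking miracle cure exposed"))  # fake
```

Modern approaches replace the bag-of-words features with learned text representations, but the dataset-split/train/apply structure stays the same.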

Questions:

  • What is the current state of research in quantum web services? Which groups are doing research in this field? What are the current research challenges? Explain the mismatch between the computing, mathematics and physics perspective on the one hand and the software engineering perspective on the other.
  • Can web services in general benefit from quantum computing, or is it rather applicable to specific problems? Which kinds of web services could benefit the most from quantum capabilities?
  • What are current capabilities of quantum computers? Which features are currently not possible especially with regard to web services?
  • What are current practical problems, assuming you have a software solution that could benefit from quantum computing and want to provide it as a quantum web service? What do the resulting hybrid software architectures of quantum web services generally look like?
  • What are relevant languages for modeling and tools/platforms for developing, testing and hosting quantum-computing-based software? How do they compare? Prepare a demo with a suitable tool/platform.

Questions:

  • What are micro frontends? How do they relate to microservices? What are advantages and disadvantages of developing applications with micro frontends? Provide guidance on when to use them and when not to.
  • What is the current state of research into micro frontends? Which groups are doing research in that area? What are the main research directions? In which venues are researchers publishing about micro frontends?
  • Which tools/platforms/frameworks exist to support the development of micro frontends? Prepare a suitable demo application showcasing relevant technological infrastructure.
  • Which design patterns exist with regard to micro frontends? How do they relate to patterns for microservices?
  • Which empirical studies/case studies investigate the impact of using micro frontends on development? What are the main findings?

Questions:

  • What is Causal AI? How does it compare to "traditional" AI? What are advantages and disadvantages?
  • Which models and methods are used? Which tools and frameworks exist to support development/training/usage of Causal AI?
  • What are usage scenarios for Causal AI? Report on successful cases of its application.
  • What is the state of research on Causal AI? What are current research directions? Which are the major publication venues and research groups in this area?

Questions:

  • What is anonymization, what is pseudonymization and how do they differ?
  • Which types of data need to be anonymized/pseudonymized?
  • Which anonymization and pseudonymization methods exist?
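One of the core anonymization concepts covered by the literature below, k-anonymity, can be checked mechanically. The sketch computes the smallest equivalence class over a chosen set of quasi-identifier columns; the records are invented.

```python
from collections import Counter

# Invented records; quasi-identifiers are the zip and age columns.
RECORDS = [
    {"zip": "091", "age": "20-29", "diagnosis": "flu"},
    {"zip": "091", "age": "20-29", "diagnosis": "cold"},
    {"zip": "091", "age": "30-39", "diagnosis": "flu"},
]

def k_anonymity(rows, quasi_identifiers):
    """Smallest equivalence-class size over the quasi-identifier columns;
    the dataset is k-anonymous exactly for k up to this value."""
    groups = Counter(tuple(row[q] for q in quasi_identifiers)
                     for row in rows)
    return min(groups.values())

print(k_anonymity(RECORDS, ["zip", "age"]))  # 1: the 30-39 record is unique
print(k_anonymity(RECORDS, ["zip"]))         # 3 after dropping age
```

Generalizing or suppressing quasi-identifier values raises k, which is exactly the trade-off that l-diversity and t-closeness (see the literature) refine further.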

Literature:

  • Gruschka, N. et al.: Privacy Issues and Data Protection in Big Data: A Case Study Analysis under GDPR. In Proceedings - 2018 IEEE International Conference on Big Data, Big Data 2018, 2019; pp. 5027–5033.
  • Clifton, C.; Tassa, T.: On syntactic anonymity and differential privacy. In Transactions on Data Privacy, 2013, 6; pp. 161–183.
  • Machanavajjhala, A. et al.: ℓ-Diversity: Privacy Beyond k-Anonymity. In ACM Transactions on Knowledge Discovery from Data, 2007, 1(1).
  • Ninghui, L.; Tiancheng, L.; Venkatasubramanian, S.: t-Closeness: Privacy beyond k-anonymity and ℓ-diversity. In Proceedings - International Conference on Data Engineering, 2007; pp. 106–115.
  • Own research

Questions:

  • What guidelines and principles exist to safeguard Good Research Practice?
  • How can these guidelines and principles be integrated into the research process?
  • What is considered "high-quality research"? What are indicators thereof?

Literature:

  • Guidelines for Safeguarding Good Research Practice https://www.dfg.de/download/pdf/foerderung/rechtliche_rahmenbedingungen/gute_wissenschaftliche_praxis/kodex_gwp_en.pdf
  • The European Code of Conduct for Research Integrity http://www.allea.org/wp-content/uploads/2017/03/ALLEA-European-Code-of-Conduct-for-Research-Integrity-2017-1.pdf
  • Open Research Data and Data Management Plans https://erc.europa.eu/sites/default/files/document/file/ERC_info_document-Open_Research_Data_and_Data_Management_Plans.pdf
  • Own research

Questions:

  • What are the reasons for sharing research data? What types of research data should be shared?
  • How do primary and secondary data differ?
  • What measures need to be taken in advance of collecting personal data?
  • What legal and ethical aspects need to be considered when sharing and publishing research data?

Literature:

  • Harrower, Natalie, Maryl, Maciej, Biro, Timea, Immenhauser, Beat, & ALLEA Working Group E-Humanities. (2020) Sustainable and FAIR Data Sharing in the Humanities: Recommendations of the ALLEA Working Group E-Humanities, Digital Repository of Ireland [Distributor], Digital Repository of Ireland [Depositing Institution], https://doi.org/10.7486/DRI.tq582c863
  • ALLEA (European Federation of Academies of Sciences and Humanities), FEAM (Federation of European Academies of Medicine), & EASAC (European Academies’ Science Advisory Council). (2021). International Sharing of Personal Health Data for Research. ALLEA. https://doi.org/10.26356/IHDT
  • Own research

Questions:

  • Why is it important to work together with users in the design process? Why is it important to have a diverse and comprehensive base of users for this process? Give positive and negative examples, where users were (not) involved in the design process.
  • Explain the following terms and explain the differences: Human-Centered Design, Universal Design and Participatory Design. What are differences to Citizen Science and crowdsourced data?
  • What are methods and frameworks for collaborative design processes? Which methods can be used for a scientific evaluation of the design? What are pitfalls in using these frameworks? Choose 5 methods and compare them.
  • Choose a design that you want to work on collaboratively with users (app, program, user interface, etc.). This should be a topic where you are not in the user group. Make a plan of what you want to do and how you would design it. Then use one of the methods above and apply it with one or two people. Document the process, your interaction with the users, what went well and what went badly. Evaluate the results you got in this interaction and compare them with your own plans, designs and assumptions. Reflect on this whole process and present your findings.

Literature:

  • Nicolai Brodersen Hansen, Christian Dindler, Kim Halskov, Ole Sejer Iversen, Claus Bossen, Ditte Amund Basballe, and Ben Schouten. 2020. How Participatory Design Works: Mechanisms and Effects. In Proceedings of the 31st Australian Conference on Human-Computer-Interaction (OZCHI'19). Association for Computing Machinery, New York, NY, USA, 30–41. https://doi.org/10.1145/3369457.3369460
  • Winograd, T., & Woods, D. D. (1997). The challenge of human-centered design. Human-centered systems: information, interactivity, and intelligence.
  • Maguire, M. (2001). Methods to support human-centred design. International journal of human-computer studies, 55(4), 587-634.
  • Story, M. F. (2001). Principles of universal design. Universal design handbook.
  • Mucha, H., Correia de Barros, A., Benjamin, J., Benzmüller, C., Bischof, A., Buchmüller, S., de Carvalho, A., Dhungel, A., Draude, C., Fleck, M., Jarke, J., Klein, S., Kortekaas, C., Kurze, A., Linke, D., Maas, F., Marsden, N., Melo, R., Michel, S., Müller-Birn, C., Pröbster, M., Rießenberger, K., Schäfer, M., Sörries, P., Stilke, J., Volkmann, T., Weibert, A., Weinhold, W., Wolf, S., Zorn, I., Heidt, M. & Berger, A. (2022). Collaborative Speculations on Future Themes for Participatory Design in Germany. i-com, 21(2), 283-298. https://doi.org/10.1515/icom-2021-0030
  • Wiggins, A., & Crowston, K. (2011, January). From conservation to crowdsourcing: A typology of citizen science. In 2011 44th Hawaii international conference on system sciences (pp. 1-10). IEEE.
  • https://www.userinterviews.com/blog/design-failure-examples-caused-by-bias-noninclusive-ux-research
  • Own research

Questions:

  • Conduct research on alternative user interfaces to classical GUIs (graphical user interfaces). Relate this to typically used mobile devices (smartphones, tablets, laptops, ...). Include gadgets and devices that can be connected (analog or digital) with these mobile devices.
  • List characteristics from these UIs to show the differences in sensory output, which sensors and actuators are needed in the device, if these UIs are accessible, what disadvantages they have, etc.
  • Which regulations and guidelines exist for providing multimodal interactions? Why is this important? Are there guidelines, regulations or best practices for designing an inclusive and accessible User Interface? What are problems with the current User Interfaces in mobile devices?
  • For the demonstration, use the sensors and actuators in your smartphone and program a simple app that shows different possibilities for user interfaces.

Literature:

  • Own research
  • O. Ozioko, P. Karipoth, M. Hersh and R. Dahiya, "Wearable Assistive Tactile Communication Interface Based on Integrated Touch Sensors and Actuators," in IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 28, no. 6, pp. 1344-1352, June 2020, doi: 10.1109/TNSRE.2020.2986222.
  • Tachiquin R, Velázquez R, Del-Valle-Soto C, Gutiérrez CA, Carrasco M, De Fazio R, Trujillo-León A, Visconti P, Vidal-Verdú F. Wearable Urban Mobility Assistive Device for Visually Impaired Pedestrians Using a Smartphone and a Tactile-Foot Interface. Sensors. 2021; 21(16):5274. https://doi.org/10.3390/s21165274
  • Khan, A., Khusro, S. Blind-friendly user interfaces – a pilot study on improving the accessibility of touchscreen interfaces. Multimed Tools Appl 78, 17495–17519 (2019). https://doi.org/10.1007/s11042-018-7094-y
  • Nathan Magrofuoco, Paolo Roselli, and Jean Vanderdonckt. 2021. Two-dimensional Stroke Gesture Recognition: A Survey. ACM Comput. Surv. 54, 7, Article 155 (September 2022), 36 pages. https://doi.org/10.1145/3465400
  • Razan Jaber and Donald McMillan. 2020. Conversational User Interfaces on Mobile Devices: Survey. In Proceedings of the 2nd Conference on Conversational User Interfaces (CUI '20). Association for Computing Machinery, New York, NY, USA, Article 10, 1–11. https://doi.org/10.1145/3405755.3406130

Questions:

  • What is Hydration?
  • What is the alternative approach of the Qwik Framework with regard to Hydration?
  • How does this solution differ from Hydration?

Questions:

  • What are the different categories of approaches? Provide state-of-the-art solutions.
  • How do the existing approaches compare?
  • Which technique is the most suitable for texts that are short in length?
  • What approaches exist to further extract features from the summarized text?

Questions:

  • What are mashups?
  • What are the different techniques for creating a voice-based/chatbot-based Web Mashup?
  • How can an existing mashup be modified with natural interactions?

Questions:

  • Why is explainable planning important in the context of end users?
  • What does it mean for the behaviour of an AI planner to be explainable?
  • What are the challenges and solutions?

Questions:

  • What are common libraries to use in the web to visualize directed (and weighted) graphs?
  • What are effective ways for a user to explore such graphs?
  • Which libraries are suitable for big data or how well do the libraries scale?

Questions:

  • What are common techniques to display and explore non-relational big data within the web browser?
  • How can non-relational big data be made adaptable via the web browser?

Literature:

  • own research

Questions:

  • What is MLOps? How does it relate to DevOps and CI/CD? Which patterns/best practices/principles exist? Which phases of ML lifecycle (e.g. dataset creation, feature engineering, training, predictions/usage, re-training/model updates) are supported and in which ways?
  • Which methods/techniques from DevOps does it apply? Which characteristics specific to ML approaches require a different approach?
  • Which tools and platforms exist to support MLOps? Prepare a demonstration using suitable tools for an MLOps scenario.

Questions:

  • Analyze provided/existing datasets of UI Object Detection for spatial characteristics. Create a queryable geo index of bounding boxes.
  • Analyze the 1-dimensional distributions, e.g. for x, y, CoG, width, height, area, and distance to the nearest neighbor (x, y and geometrical). Create heatmap visualizations of the bounding boxes (left corner, right corner, CoG, full area).
  • Analyze alignments/alignment graphs. Are there regularities/patterns?
  • Are spatial aspects characteristic? Are there differences across classes? Can object types be predicted from spatial data? Are there differences compared to random/low-quality datasets?
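
Several of the distributions above can be computed directly from the bounding boxes; a minimal sketch with made-up boxes in (x, y, width, height) form (a real analysis would load these from an annotated UI object-detection dataset):

```python
import math

# Hypothetical UI bounding boxes as (x, y, width, height).
boxes = [(10, 10, 100, 30), (10, 50, 100, 30), (120, 10, 60, 70)]

def center(b):
    """Centre of gravity (CoG) of a bounding box."""
    x, y, w, h = b
    return (x + w / 2, y + h / 2)

areas = [w * h for _, _, w, h in boxes]
centers = [center(b) for b in boxes]

def nn_distance(i):
    """Euclidean distance from box i's centre to its nearest neighbor."""
    cx, cy = centers[i]
    return min(math.hypot(cx - ox, cy - oy)
               for j, (ox, oy) in enumerate(centers) if j != i)

for i, b in enumerate(boxes):
    print(b, areas[i], round(nn_distance(i), 1))
```

From such per-box statistics, histograms and heatmaps of the 1-dimensional distributions can then be produced with any plotting library.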

Literature:

  • Heil, S., Bakaev, M., & Gaedke, M. (2021). Web User Interface as a Message: Power Law for Fraud Detection in Crowdsourced Labeling. In M. Brambilla, R. Chbeir, F. Frasincar, & I. Manolescu (Eds.), Web Engineering (Vol. 12706, pp. 88–96). Springer International Publishing. https://doi.org/10.1007/978-3-030-74296-6_7

Questions:

  • What does the term Open Science mean? How many scientific works are published in this way?
  • Look at the following terms: Open Access, Open Data and Open Source. Give an overview of each topic: what differs from the traditional publishing process, and what is important at each step.
  • What is the difference to the FAIR principles? What are the differences to publishing works on ResearchGate, arXiv, etc.?
  • How does the review process work in Open Access publishing? What are the funding possibilities?
  • For the demonstration, show which platforms TU Chemnitz provides for each aspect of publishing a paper and where you may need to use another platform. Also give a brief overview of similar platforms and tools.

Questions:

  • What is the difference between citation, bibliography and bibliometrics? What is their importance for scientific works?
  • Explain and differentiate at least 5 common citation styles for computer science (IEEE, APA, ACM, ...). Illustrate them with examples.
  • What are the rules and expectations for a bibliography?
  • Which tools/programs can be used for citations and bibliographies while writing a paper?
  • Which metrics can be used in bibliometrics? How do they differ? Where are their limits? Show these with examples.
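
One widely used bibliometric indicator is the h-index; a minimal sketch (the citation counts below are made up):

```python
def h_index(citations):
    """h-index: the largest h such that the author has h papers
    with at least h citations each."""
    h = 0
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
    return h

# An author with papers cited 10, 8, 5, 4 and 3 times has h-index 4:
# four papers have at least 4 citations each.
print(h_index([10, 8, 5, 4, 3]))
```

Related indicators such as the g-index or journal impact factors follow similar counting ideas but weight citations differently.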

Questions:

  • What are common access control models?
  • Are they applicable to Linked Data?
  • What are access control models specifically for Linked Data?
  • What are their advantages or disadvantages? How do they compare to other access control models?

Literature:

  • Kirrane, Sabrina. (2015). Linked Data with Access Control.
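
As background, one of the most common of these models, role-based access control (RBAC), can be sketched in a few lines (users, roles and permissions below are made up):

```python
# Minimal RBAC sketch: permissions are granted to roles,
# and users are assigned roles.
ROLE_PERMISSIONS = {
    "reader": {"read"},
    "editor": {"read", "write"},
    "admin": {"read", "write", "delete"},
}
USER_ROLES = {"alice": {"editor"}, "bob": {"reader"}}

def is_allowed(user, action):
    """A user may perform an action if any of their roles grants it."""
    return any(action in ROLE_PERMISSIONS.get(role, set())
               for role in USER_ROLES.get(user, set()))

print(is_allowed("alice", "write"), is_allowed("bob", "write"))
```

Access control models for Linked Data typically express such policies as RDF themselves and attach them to graphs, resources or triple patterns rather than to application objects.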

Questions:

  • Which challenges exist in querying large-scale RDF data?
  • What does HDT aim to improve and how does it work?
  • How can HDT be applied?

Literature:

  • Martínez-Prieto, Miguel A. & Arias, Mario & Fernández, Javier. (2012). Exchange and Consumption of Huge RDF Data. 7295. 437-452. 10.1007/978-3-642-30284-8_36.
  • Fernández, Javier & Martínez-Prieto, Miguel A. & Gutierrez, Claudio & Polleres, Axel & Arias, Mario. (2013). Binary RDF Representation for Publication and Exchange (HDT). Journal of Web Semantics. 19. 22-41. 10.1016/j.websem.2013.01.002.
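
Conceptually, HDT splits a dataset into a Dictionary component (mapping RDF terms to integer IDs) and a Triples component (ID-encoded triples). A toy illustration of this idea, not the actual compressed HDT encoding:

```python
def encode(triples):
    """Map RDF terms to integer IDs (the Dictionary) and store the
    triples as compact ID tuples (the Triples component)."""
    dictionary = {}
    def term_id(term):
        return dictionary.setdefault(term, len(dictionary) + 1)
    encoded = [(term_id(s), term_id(p), term_id(o)) for s, p, o in triples]
    return dictionary, encoded

# Hypothetical two-triple dataset.
triples = [
    ("ex:alice", "foaf:knows", "ex:bob"),
    ("ex:alice", "foaf:name", '"Alice"'),
]
dictionary, encoded = encode(triples)
print(dictionary, encoded)
```

The real format additionally compresses both components (e.g. shared subject/object dictionaries, bitmap-encoded adjacency lists), which is what enables querying large RDF datasets without full decompression.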

Questions:

  • What is required to migrate the trust model REGRET so that it can be used by web applications?
  • What has to be adapted within REGRET for such a migration?
  • How do its scalability and accuracy compare to ConTED?

Questions:

  • What is required to migrate the trust model EigenTrust so that it can be used by web applications?
  • What has to be adapted within EigenTrust for such a migration?
  • How do its scalability and accuracy compare to ConTED?
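
For background, the core of EigenTrust is a power iteration over the row-normalized local trust matrix C, damped towards a pre-trust distribution p; a minimal sketch with a made-up 3-peer network:

```python
def eigentrust(C, p, a=0.15, iterations=200):
    """EigenTrust with pre-trusted peers: iterate
    t <- (1 - a) * C^T t + a * p until t approximates the
    global trust vector."""
    n = len(C)
    t = list(p)
    for _ in range(iterations):
        t = [(1 - a) * sum(C[j][i] * t[j] for j in range(n)) + a * p[i]
             for i in range(n)]
    return t

# Row-normalized local trust: C[i][j] = share of i's trust placed in j.
# Peers 1 and 2 fully trust peer 0; peer 0 splits its trust between them.
C = [[0.0, 0.5, 0.5],
     [1.0, 0.0, 0.0],
     [1.0, 0.0, 0.0]]
p = [1 / 3] * 3  # uniform pre-trust
print([round(x, 3) for x in eigentrust(C, p)])
```

A web-application setting would change where the local trust values C[i][j] come from and how the iteration is distributed, which is exactly what the migration questions above target.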

Questions:

  • Which technologies, frameworks, platforms and architectures are currently used by major web and mobile applications (at least 100)?
  • To answer the question, conduct a survey gathering publicly available information from web sources and identification through tools/analysis of HTTP headers etc.
  • For each application, identify the basic architecture with its main top-level components (e.g. Mobile App, API Gateway, Load Balancer, Database, Indexer, etc.). For these components, identify the technologies, frameworks and platforms used (e.g. programming language, frameworks for application logic and presentation, DBMS, web server, cloud platform, etc.), and for each of these data points keep track of the source of information (e.g. the URL of the corresponding web resource, the endpoint and tool used for analysis, the binary artifact analyzed, etc.).
  • Further analyze the survey results to identify groups of similar technology stacks, patterns, frequencies of technologies per component type, common architectures, etc.

Literature:

  • Survey at least 100 major web and mobile applications, below are some examples
  • Search Engines: Google Search, DuckDuckGo, Yahoo Search
  • E-Commerce: Ebay, Amazon, zalando, Alibaba, rakuten
  • Social Media (web apps): Facebook, Twitter, Youtube
  • Social Media (mobile apps): WhatsApp, TikTok, Snapchat, Telegram, Twitter, Facebook
  • Super Apps: WeChat, Grab, Omni, Alipay

Seminar Opening

The date and time of the seminar opening meeting will be announced via OPAL.

Short Presentation

The date and time of the short presentations will be announced via OPAL.

In your short presentation, you will provide a brief overview on your selected topic.

This includes the following aspects:

  1. What is your topic about?
  2. Which literature sources did you research so far?
  3. What is your idea for a demonstration?

Following your short presentations, the advisors will provide you with feedback and hints for your full presentations.

Hints for your Presentation

  • As a rule of thumb, you should plan 2 minutes per slide. A significantly higher number of slides per minute exceeds the perceptive capacity of your audience.
  • Prior to your presentation, you should consider the following points: What is the main message of my presentation? What should the listeners take away?
    Your presentation should be created based on these considerations.
  • The following site provides many good hints: http://www.garrreynolds.com/preso-tips/

Seminar Days

The dates and times of the seminar days will be announced via OPAL.

Report

  • Important hints on citing:
    • Any statement which does not originate from the author has to be provided with a reference to the original source.
    • "When to Cite Sources" - a very good overview by Princeton University
    • Examples for correct citation can be found in the IEEE-citation reference
    • Web resources are cited with author, title and date including URL and Request date. For example:
      • [...] M. Nottingham and R. Sayre. (2005). The Atom Syndication Format - Request for Comments: 4287 [Online]. Available: http://www.ietf.org/rfc/rfc4287.txt (18.02.2008).
      • [...] Microsoft. (2015). Microsoft Azure Homepage [Online]. Available: http://azure.microsoft.com/ (23.09.2015).
      • A URL should be a clickable hyperlink where technically possible.
  • Further important hints for the submission of your written report:
    • Apart from justifiable exceptions (for instance, highlighting text using <strong>...</strong>), use only HTML elements that occur in the template. The provided CSS file may not be changed.
    • Before submitting your work, carefully check spelling and grammar, preferably with software support, for example with the spell checker of Microsoft Word.
    • Make sure that your HTML5 source code has no errors. To check it, use the W3C online validator.
    • For submission, compress all necessary files (HTML, CSS, images) into a ZIP or TAR.GZ archive.

Review

  • Each seminar participant has to review exactly three reports. The reviews are not anonymous.
  • Use the review forms provided in the VSR Seminar Workflow, one per report.
  • Following the review phase, each seminar participant will receive the three peer reviews of his or her report and, if necessary, additional comments by the advisors. You will then have one more week to improve your report according to the received feedback.
  • The seminar grade will consider the final report.
    All comments in the reviews are for improving the text and therefore in the interest of the author.

Press Articles