Distributed Transaction Management Systems

Topic 1: Big Data Storage Allocation in Cloud Computing

The challenge of efficiently storing and managing data is intensifying with the enormous growth of data. Big data storage and management have become a pressing challenge for today's industry.

Many types of information are stored across many different locations on the cloud.

In particular, an increasing number of enterprises employ distributed storage systems to store, manage, and share huge amounts of critical business information on the cloud.

The same document may be duplicated in several places. Duplication makes retrieval convenient and efficient.

However, once the data has been modified, it is difficult to update all copies of the same document. Providing consistent, efficient, and reliable retrieval of data stored in different locations is a complicated task with multiple objectives.

One important open problem is how to balance the load across the system with minimal update cost. Another is how to make the system elastic, so that available resources are utilized effectively with minimal communication cost.

Providing effective techniques for designing scalable, elastic, and autonomic multitenant database systems is a critical and challenging task. In addition, ensuring the security and privacy of data outsourced to the cloud is also important for the success of data management systems in the cloud.

Topic 2: Adopting NoSQL for Big Data Management

Big data is well on its way to becoming enormous. Organizations have great potential to utilize big data to enhance the customer experience and transform their business to win the market.

Big data enables organizations to store, manage, and manipulate vast amounts of data to gain the right knowledge. Big data is a combination of data-management technologies evolved over time.

How does a company store and access big data to the best advantage? Are traditional databases still the best option? What does it mean to transform massive amounts of data into knowledge? Clearly, big data requirements are beyond what relational databases can deliver for huge-volume, highly distributed, and complexly structured data. Traditional relational databases were never designed to cope with modern application requirements, including massive amounts of unstructured data and global access by millions of users on mobile devices, which requires geographic distribution of data.

In this research, we will identify the gap between enterprise requirements and the capabilities of traditional relational databases, and look for alternative database solutions. We will explore NoSQL data management technologies for big data to identify where they offer the greatest advantage.

We will gain insights into how technology transitions in software, architecture, and process models are changing development in new ways.

Topic 3: Top-k Queries in Uncertain Big Data

Effectively extracting reliable and trustworthy information from big data has become crucial for large business enterprises.

Obtaining useful knowledge for making better decisions to improve business performance is not a trivial task. The most fundamental challenge in big data extraction is handling data uncertainty for emerging business needs such as marketing analysis, prediction, and decision making.

It is clear that the answers to analytical queries performed over imprecise data repositories are naturally associated with a degree of uncertainty. It is therefore crucial to extract reliable and accurate data for effective data analysis and decision making.
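One common formulation of top-k querying over uncertain data ranks tuples by their expected score. The sketch below is a minimal illustration under that assumption; the tuple fields and data are hypothetical, and real approaches (e.g. U-Topk, expected rank) are considerably more involved.

```python
# Minimal sketch: top-k ranking over uncertain tuples by expected score.
# Each tuple carries a score and an existence probability (hypothetical data).
def expected_top_k(tuples, k):
    """Rank tuples by score * probability and return the top-k ids."""
    ranked = sorted(tuples, key=lambda t: t["score"] * t["prob"], reverse=True)
    return [t["id"] for t in ranked[:k]]

readings = [
    {"id": "a", "score": 90, "prob": 0.3},   # expected 27.0
    {"id": "b", "score": 60, "prob": 0.9},   # expected 54.0
    {"id": "c", "score": 70, "prob": 0.5},   # expected 35.0
]
print(expected_top_k(readings, 2))  # ['b', 'c']
```

Note how the highest raw score ("a") drops out of the top two once its low existence probability is taken into account.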

Therefore, this project will develop new techniques and novel algorithms to extract reliable and useful information from massive, distributed, large-scale data repositories.

Topic 4: Feature-based Recommendation Framework on OLAP

Queries in Online Analytical Processing (OLAP) are user-guided.

OLAP is based on a multidimensional data model for complex analytical and ad-hoc queries with rapid execution times. These queries, whether routine or on-demand, revolve around OLAP tasks.

Most such queries are reusable and optimized in the system. Therefore, the queries recorded in the query logs for completing various OLAP tasks may be reusable.

The query logs usually contain sequences of SQL queries that reveal users' action flows, preferences, interests, and behaviours. This research project will investigate feature extraction to identify query patterns and user behaviours from historical query logs.
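A very simple feature one can extract from such logs is the frequency of consecutive query pairs, which already hints at habitual drill-down paths. The sketch below uses hypothetical log entries (real logs would hold full SQL text and session boundaries).

```python
# Sketch: mine frequent consecutive query pairs from a session's query log.
# The log entries are hypothetical labels standing in for full SQL queries.
from collections import Counter

def frequent_pairs(log, top=1):
    """Count consecutive (query, next_query) pairs; return the most common."""
    pairs = Counter(zip(log, log[1:]))
    return pairs.most_common(top)

log = ["sales_by_region", "drilldown_region", "sales_by_region",
       "drilldown_region", "sales_by_month"]
print(frequent_pairs(log))
```

A pair that recurs often is a natural candidate for recommending the likely next query to a decision maker.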

The expected results will be used to recommend forthcoming queries to help decision makers with data analysis. The purpose of this research is to improve the efficiency and effectiveness of OLAP in terms of computation cost and response time.

Professor Phoebe Chen

Scientific Visualization

Topic 1: Big Data Analysis and Management

The challenges of big data analysis include investigation, collection, visualization, exploration, distribution, storage, transmission, and security. Interest in big data sets is driven by the additional information derivable from analysing large sets of related data, allowing correlations to be discovered and turned into useful information and knowledge.

This project will address limitations posed by big data sets in one of several areas, including bioinformatics/genomics, multimedia, complex simulations, and environmental discovery.

Topic 2: Forensic Applications of Bar Codes

Bar code readers are used in applications ranging from supermarket checkouts to medical devices. Bar codes are also incorporated into exhibit labels and evidence bags.

Forensic applications of bar codes include 'decoding' of damaged or partial bar codes on parts of suspected stolen vehicles. Work by Barrett and Smith (Science & Justice, No. 3, 2005) showed that it was possible to restore an altered barcode to its original state. This project will examine techniques to restore partial barcodes and develop a test to ensure the results obtained are valid.
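One concrete example of partial-barcode restoration is recovering a single unreadable digit of an EAN-13 code from its checksum: the digits, weighted 1, 3, 1, 3, ... from the left, must sum to a multiple of 10. A minimal sketch:

```python
# Sketch: recover a single unreadable digit ('?') in an EAN-13 barcode
# using the checksum rule: digits weighted 1,3,1,3,... from the left
# must sum to a multiple of 10.
def recover_digit(code):
    """Return the candidate digits for the one '?' in a 13-digit code."""
    pos = code.index("?")
    weight = 1 if pos % 2 == 0 else 3
    partial = sum(int(d) * (1 if i % 2 == 0 else 3)
                  for i, d in enumerate(code) if d != "?")
    return [d for d in range(10) if (partial + d * weight) % 10 == 0]

print(recover_digit("400638133393?"))  # -> [1]
```

Because both weights (1 and 3) are coprime to 10, a single missing digit is always recovered uniquely; restoring several missing digits needs additional constraints, which is where the research lies.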

Topic 3: Using Computational Approaches for Adaptive Hearing

In this project we focus on speech-based correlates of a variety of medical conditions, using automatic signal processing and computational methods. If such speech indicators can be recognized and quantified automatically, this information can be used to support diagnosis and treatment of medical conditions in clinical settings and to further research into cognition.

This research will explore features extracted from the audio signal and present applied research that uses computational methods to develop assistive and adaptive speech technologies.

Topic 4: Home-based Virtual Reality Systems to Support Physical Activity for Health and Wellbeing

Recently, the Nintendo Wii (hand-held remote controllers and the Wii Fit balance board) has been used to support physical activity, movement, balance, and health at home.

In this project, we would like to find out how this technology can help older people at home: What virtual reality systems have been used to support physical activity for people's health and wellbeing? How helpful are virtual reality systems for people with physical activity limitations?

Topic 5: Sophisticated Diagnostic Medical Imaging

Imaging technologies such as Magnetic Resonance Imaging and Ultrasonography allow researchers to investigate image structures. This gives us the chance to diagnose disease and assess health through image analysis.

Dr Scott Mann

Topic 1: DNA Packing Prediction in Prokaryotes

Prokaryotes, single-celled organisms such as bacteria, do not have an enclosed nucleus; their DNA therefore floats freely in the cytoplasm.

This project will use computational techniques to analyse DNA sequences to assess supercoiling in the context of packing large amounts of DNA and its implications for 3D structure. Machine learning, profile generation, and statistical techniques will be combined to generate a suite of predictive tools for the bioinformatics community.

Topic 2: Visualisation of Comparative Genomics Data

The comparative genomics approach compares two or more genomes (the total heritable portion of an organism). Traditional visual presentations have centred on linear tracks with connecting lines to show points of similarity or difference.

In this project you will overlay large amounts of comparative data on a set of 3D surfaces controlled through human-interaction interfaces such as the Xbox Kinect.

Topic 3: Real-time Concept Feedback in Lectures

This project requires you to develop a web application that students and teachers can use to gauge how well concepts are being understood by the class.

The intention is that students come to a lecture, open the class handout in digital form on their iPads/tablets/laptops, and follow along with the lecturer. On the presenter's screen, the lecturer will have a panel showing how well the class is understanding the content being taught.

Significant literature analysis of existing techniques in this research area would be a feature of the project.

Dr Naveen Chilamkurti

Vehicular communications

Topic 1: A Cross-layer Architecture for Improvement of H.264 Video Transmission over Wireless LAN Networks

Recently, in an effort to improve the performance of wireless networks, there has been increased interest in protocols that rely on interactions between different layers of the OSI architecture. The H.264/AVC video coding standard, proposed by the JVT and ITU-T, achieved a significant improvement in compression efficiency over existing standards. This project will focus on the transmission issues of H.264 over WLAN (Wireless LAN) using a QoS cross-layer architecture.

Based on the QoS requirements of the different data partitions in H.264, a marking scheme can be designed at the MAC layer to improve video quality for H.264 transmission.

Topic 2: Performance and Analysis of Wireless Multimedia Sensor Networks (WMSN)

Recent advances in hardware techniques have fostered the development of Wireless Multimedia Sensor Networks (WMSN).

These networks interconnect devices that capture video, voice, and still images, and are connected to a remote site for data and video analysis. The processing and quality of video and audio will be a challenging factor, especially with low-powered sensor nodes.

Existing solutions, frameworks, and design implementations using test beds and simulations will be investigated. Open research issues at the application, transport, network, link, and physical layers of the communication protocol stack will also be investigated.

Topic 3: Adaptive FEC Algorithms for Wireless LAN Networks

In this project, we use an adaptive FEC algorithm for wireless networks which extends traditional FEC schemes. The algorithm builds on the advantage of traditional FEC schemes, which add redundancy so that errors occurring within information packets can be corrected without retransmission.
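The adaptive idea can be sketched very simply: pick the amount of redundancy per block from the observed channel loss rate. The function below is an illustrative toy, not the project's actual algorithm; its thresholds and over-provisioning factor are assumptions.

```python
# Sketch: choose the number of redundancy packets per block of data packets
# from the observed channel loss rate (the factor of 2 is illustrative).
import math

def fec_strength(loss_rate, data_packets=8):
    """More redundancy as loss grows; capped at the block size."""
    redundancy = math.ceil(loss_rate * data_packets * 2)  # simple over-provision
    return min(redundancy, data_packets)

print(fec_strength(0.0))   # 0 - clean channel, no overhead
print(fec_strength(0.1))   # 2
print(fec_strength(0.9))   # 8 - capped at the block size
```

The key property is that overhead shrinks to zero on a clean channel and grows with loss, which is exactly the trade-off a real adaptive FEC scheme tunes.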

Using simulation experiments, our work has shown that the adaptive FEC algorithm improves system performance by dynamically tuning the FEC strength to the current amount of wireless channel loss.

Topic 4: Vehicular Communications

The development of Intelligent Transport Systems (ITS) brings the promise of improved road safety and comfortable/infotainment driving environments.

Recent advances in wireless vehicular communications supporting Vehicle-to-Vehicle (V2V) and Vehicle-to-Infrastructure (V2I) communications have become a cornerstone of ITS. Wireless vehicular communication for ITS is one of the most interesting and active research topics, requiring substantial effort from both industry and academia.

In particular, studies on network routing and communication algorithms for V2V and V2I have posed various challenges. Most current work on V2V and V2I communications focuses mainly on non-IP data communications.

These challenges require developing new network routing protocols and designing communication algorithms, especially for IP data communications.

Dr Somnath Ghosh

Topic 1: Implementation of a MAC Layer Protocol for a Synchronised Global Sleep Schedule for Wireless Sensor Networks

Wireless Sensor Networks

The rapid advances in recent years in integrated circuit electronics, wireless communication, and micro-electromechanical systems have led to the emergence of wireless sensor network technology.

The 'Smart Dust' project at the University of California, Berkeley, introduced the vision of self-configuring networks of inexpensive, small (~1 mm³) nodes (each with a processor, a radio transceiver, and sensors) for a wide range of applications such as environmental monitoring, health care, and agriculture. Though the application domain of wireless sensor networks (WSNs), extant and envisaged, is wide, they share a few common characteristics: WSNs are almost always single-application systems.

The nodes in a WSN co-operate towards the goal of the application; the nodes do not compete for resources. The protocols used in a WSN, therefore, are designed with objectives which differ from the objectives of the protocols in other computer networks.

WSNs are usually randomly deployed (scattered/aerially dropped) and are self-configuring. The nodes discover their neighbours and build the topology using distributed algorithms and local knowledge.

In order to keep the size and the cost of the nodes down, the nodes have limited processing power, memory and radio range.

However, the resource constraint which has the most significant impact on many WSNs is the constraint on energy. Many wireless sensor networks are deployed in locations where battery replacement is not feasible. A node has to be discarded when the battery depletes.

Energy scavenging may alleviate this problem in some sensor networks. Most WSN protocols are very conscious of the limited supply of energy, and try to conserve energy.

Energy-efficient Medium Access Control protocols

A medium access control protocol allows the nodes in a neighbourhood (nodes within radio range) to access the communications medium without interfering with each other. This may require monitoring communication in the neighbourhood, and communicating with neighbours even when no data is to be communicated.

As stated earlier, in a WSN, energy expenditure needs to be kept low while carrying out the activities required for medium access control. A common strategy for energy conservation in WSNs is to allow the nodes to turn off their radios (entering a sleep mode) periodically, as the radio is the major energy consumer in a WSN node.
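The energy benefit of periodic sleeping is easy to quantify: average radio power is the duty-cycle-weighted mix of active and sleep power. The power figures below are illustrative placeholders, not measurements from any real mote.

```python
# Sketch: estimated average radio power under a periodic sleep schedule.
# Power figures are illustrative, not measurements from a real mote.
def avg_power_mw(awake_ms, period_ms, active_mw=20.0, sleep_mw=0.02):
    """Duty-cycle-weighted average of active and sleep power per period."""
    duty = awake_ms / period_ms
    return duty * active_mw + (1 - duty) * sleep_mw

print(avg_power_mw(100, 1000))  # 10% duty cycle -> roughly a tenth of 20 mW
```

Even with these toy numbers, a 10% duty cycle cuts average radio power by nearly an order of magnitude, which is why synchronised sleep schedules matter so much for battery lifetime.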

However, when two neighbouring nodes need to communicate, they must both be awake. One way to achieve this, as proposed in [1, 2], is for one of the two neighbours to poll the other to set up a rendezvous.

Another mechanism is for all nodes in a neighbourhood to follow a synchronised periodic sleep pattern, as proposed in [3, 4]. The synchronised sleep pattern scheme proposed in [3] is used in the popular Mica and Telos motes commercially produced by Crossbow.

However, creating and maintaining a single sleep schedule requires overcoming a number of difficulties [5], including the problem of designing a distributed algorithm for merging clusters of nodes following different sleep schedules.

Proposed work

This project will involve implementing a new MAC protocol with a synchronised sleep schedule on TELOS-B WSN nodes (in a C-like language, nesC, on the TinyOS platform), computing the parameters (control packet transmission times, propagation delays, etc.) for the implementation of the protocol, and then using these parameters to simulate a large WSN system (using OPNET) to evaluate the performance of the new MAC protocol.

References

1. Power-efficient rendezvous schemes for dense wireless sensor networks, 2004 IEEE International Conference on Communications, 2004.
2. Optimizing Sensor Networks in the Energy-Latency-Density Design Space, IEEE Transactions on Mobile Computing, Vol. 1, pp. 70-80, 2004.
3. Wei Ye and J. Heidemann, Medium access control with coordinated adaptive sleeping for wireless sensor networks, IEEE/ACM Transactions on Networking, Vol. 12, pp. 493-506, 2004.
4. Wei Ye, Fabio Silva and John Heidemann, Ultra-Low Duty Cycle MAC with Scheduled Channel Polling, Proceedings of the Fourth ACM SenSys Conference, 2006.
5. Ghosh, S., Performance of a Wireless Sensor Network MAC Protocol with a Global Sleep Schedule, International Journal of Multimedia and Ubiquitous Engineering, Vol.

Zhen He will not be available this year to supervise any honours/masters thesis topics.

Richard Lai

Requirements Engineering and UML Specification

The projects I offer are suitable for students with a keen interest in software engineering and software project management. Those who obtained a good grade in the Software Engineering project, SNM, or SRT are encouraged to discuss the potential projects with me.

Topic 1: Software Sizing: A UML Approach

Software size is important in the management of software development because it is a generally reliable predictor of project effort, duration, and cost. According to the Software Engineering Institute's (SEI) Capability Maturity Model (CMM), size is recommended as one of the most fundamental measurements, beginning as early as Level 2.

For the past two decades, counting Source Lines of Code (SLOC) and Function Points (FP) have been the dominant software sizing approaches. Software size is a key input to all software cost estimation models.

For example, SLOC has been used as the primary size input in many cost estimation tools such as COCOMO. Unfortunately, there are significant drawbacks in SLOC and FP sizing for software estimation.
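To make the size-to-effort link concrete, here is the basic COCOMO formula for an "organic" (small, familiar) project, effort in person-months = 2.4 × KLOC^1.05. This is a sketch of Boehm's basic model only; the detailed and intermediate models add cost drivers.

```python
# Sketch: basic COCOMO effort estimate from a SLOC count.
# Organic-mode coefficients a=2.4, b=1.05 from Boehm's basic model.
def cocomo_effort_pm(kloc, a=2.4, b=1.05):
    """Estimated effort in person-months for an 'organic' project."""
    return a * kloc ** b

print(round(cocomo_effort_pm(10), 1))  # ~26.9 person-months for 10 KLOC
```

The formula makes the drawback noted above tangible: the estimate is only as good as the KLOC input, which cannot be counted accurately until construction is complete.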

For example, SLOC can only be accurately counted when software construction is complete, while the most critical software estimates need to be performed before construction. FPs can only be counted manually, and the estimator must have special expertise and experience to do so.

Furthermore, FP counting involves a degree of subjectivity. Facing these challenges, researchers are looking for faster, cheaper, and more effective methods to estimate software size.

This project is to investigate the use of UML as a software sizing technique.

Topic 2: Software Process Improvement for Component-Based Software Engineering

In the last few years, the software engineering community has witnessed the growing popularity of Component-Based Development (CBD), refocusing software development from core in-house development to the use of internally or externally supplied components.

Component-Based Software Engineering (CBSE) as an emerging discipline is targeted at improving the understanding of components and of systems built from components and at improving the CBD process itself. The field of Software Process Improvement (SPI), and in particular of assessment-based software process improvement, shares very similar goals to CBSE – shorter time-to-market, reduced costs and increased quality – and provides a wide spectrum of approaches to the evaluation and improvement of software processes.

Although this discipline has made considerable advances in the standardization of these approaches (e.g. ISO 15504), as well as of the underlying process models, it generally lacks tailoring and customisation to CBSE, e.g. with respect to terminology or the adequacy and granularity of the underlying process or assessment models. This project is to investigate how SPI can be applied to CBSE.

Topic 3: Requirements Engineering for Component-Based Software Systems

Requirements Engineering (RE) consists of eliciting stakeholders' needs, refining the acquired needs into non-conflicting requirement statements, and validating these requirements with stakeholders. Component-Based Software Development (CBSD) introduces new challenges for the RE process, as component selection is one of its key issues.

Components are designed according to general requirements. As such, the needs of stakeholders should be continually negotiated and adjusted according to the features offered by components.

In addition, CBSD requirements need not be complete initially, as incomplete requirements can be progressively refined when suitable components are found. The RE process of current CBSD is mainly driven by the availability of software components.

This reduces the scope of requirements negotiation and makes it difficult to address quality attributes and system-level concerns. In addition, components are selected on an individual basis, which makes it difficult to evaluate how they fit with the overall system requirements.

Therefore, CBSD should be driven by stakeholders' requirements. CBSD requirements are collected as high-level needs and are then modelled by identifying the importance of each need.

Each need is identified as mandatory, important, essential, or optional. This project is to investigate a systematic process for refining these requirements by specifying candidate components.

Dr Fei Liu

Web Semantics

Topic 1: Word Sense Disambiguation based Query Expansion

You may wonder, when you type the question "What is the weather today in Melbourne?", how the Google search engine figures out the meaning of your question and presents you with an accurate answer. This is the research area of Question-Answering Systems (QA Systems).

A QA system converts a user's query into a sequence of key words, conducts web search using the keywords, and identifies the most proper text segment as the answer to the query. The research project is to analyse the query being entered by the user, to expand it by adding synonyms, to identify the key words within the query, and finally to decide the precise meaning of each key word.
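The pipeline described above can be sketched in a few lines. The stop-word list and synonym table below are tiny hand-built stand-ins for a real lexical resource such as WordNet; choosing *which* synonym sense to add is exactly the word sense disambiguation problem this project studies.

```python
# Sketch: keyword extraction plus synonym-based query expansion.
# STOP and SYNONYMS are toy stand-ins for a resource like WordNet.
STOP = {"what", "is", "the", "in", "today", "a"}
SYNONYMS = {"weather": ["forecast", "conditions"]}

def expand_query(query):
    """Lower-case, drop stop words, and append known synonyms."""
    words = [w.strip("?") for w in query.lower().split()]
    keywords = [w for w in words if w not in STOP]
    expanded = list(keywords)
    for w in keywords:
        expanded.extend(SYNONYMS.get(w, []))
    return expanded

print(expand_query("What is the weather today in Melbourne?"))
# ['weather', 'melbourne', 'forecast', 'conditions']
```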

Word Sense Disambiguation techniques will be applied in this research.

Topic 2: Connectives and Phrases based Sentiment Analysis

The World Wide Web contains a huge number of documents that express opinions, including comments, feedback, critiques, reviews, and blogs.

These documents provide valuable information which can help people with their decision making. For example, product reviews can help enterprises promote their products; comments on a policy can help politicians clarify their political strategy; event critiques can help the involved parties reflect on their activities, etc.

However, the number of these types of documents is huge, so it is impossible for humans to read and analyse all of them. Thus, automatically analyzing opinions expressed on various web platforms is increasingly important for effective decision making.

The task of developing such techniques is known as sentiment analysis or opinion mining. In this project, we attempt to analyse the sentiment orientation of a sample by identifying the connectives and phrases in its text.

As a result, the keywords that express the sentiment orientation of the author can be identified. The method is to be combined with classical analysis methods (machine learning based or clustering based) to achieve higher accuracy.
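A minimal sketch of the connective idea: a contrastive connective such as "but" usually shifts the dominant sentiment to the clause that follows it. The lexicon and the weighting factor are toy assumptions, not the project's actual method.

```python
# Sketch: a contrastive connective ("but") shifts the dominant sentiment
# to the clause that follows it. LEXICON is a toy stand-in.
LEXICON = {"good": 1, "great": 1, "bad": -1, "poor": -1, "slow": -1}

def clause_sentiment(text):
    return sum(LEXICON.get(w, 0) for w in text.lower().replace(".", "").split())

def sentiment(text):
    """Weight the clause after 'but' more heavily than the one before it."""
    if " but " in text.lower():
        before, after = text.lower().split(" but ", 1)
        return clause_sentiment(before) + 2 * clause_sentiment(after)
    return clause_sentiment(text)

print(sentiment("The screen is great but the battery is poor"))  # -1
```

A plain bag-of-words score for this sentence would be neutral (one positive, one negative word); the connective rule correctly tips it negative, which is the kind of signal this project would combine with a classical classifier.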

Topic 3: Natural Language Independent Knowledge Representation

The purpose of this research is to establish a new scheme for knowledge representation: Natural Language Independent Knowledge Representation. The scheme aims to represent a datum (a concept) by its relations with other concepts, so that the representation can be language independent or semi-independent.

A concept can be implemented as a class in the Java programming language. The class hierarchy can be established through the inheritance relationship.

Attributes in the class define the relations between concepts. The scheme can be applied to Natural Language Processing, Sentiment Analysis and Question-Answering Systems to serve as a tool for identifying the precise meaning of a word, and consequently to achieve Word Sense Disambiguation.
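The thesis proposes Java classes; the same idea is shown here in Python for illustration. A concept is a class, inheritance gives the 'is-a' hierarchy, and attributes hold non-hierarchical relations to other concepts. All names below are hypothetical.

```python
# Sketch of the proposed scheme: concepts as classes, 'is-a' via
# inheritance, and other relations as attributes linking concepts.
class Concept:
    def __init__(self, name):
        self.name = name
        self.relations = {}          # relation label -> other Concept

class Animal(Concept):
    pass

class Dog(Animal):                   # 'is-a' relation via inheritance
    pass

bone = Concept("bone")
rex = Dog("dog")
rex.relations["eats"] = bone         # non-hierarchical relation

print(issubclass(Dog, Animal), rex.relations["eats"].name)  # True bone
```

Because the graph links concepts rather than words, the same structure can back vocabularies in different natural languages, which is what makes the representation language independent.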

Dr Kinh Nguyen

Topic 1: Managing Statistical Survey Data through Conceptual Modeling

Surveys, or questionnaires, are a very common means of obtaining information in scientific and social investigations. Typically, the data are entered into a number of data files (e.g. text files or Excel files), and the data in the files are then fed into a statistical package for analysis.

As the work progresses, to test some hypotheses or to perform exploratory analysis, new data files often have to be prepared.

This approach is very time-consuming and error-prone. This thesis proposes and investigates a new approach, in which we build a model of the data contained in the survey.

This model captures all the information contained in the survey, where each piece of information is modelled as a fact. We then derive (from the fact model) a schema for a relational database and store data in the database.

Then for any statistical analysis to be done, we simply extract the required data, in its required tabular form, using standard SQL, and feed them to the statistical package. In fact, this project serves two related purposes: (1) To systematically ensure that we can store all the relevant data and maximize their use for statistical analysis, and (2) To construct an ontology of the related field to improve the concepts and their relationships in that field.
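The extraction step described above can be illustrated with Python's built-in sqlite3 module. The fact table and survey data below are hypothetical; a real fact model derived from a survey would have one table per fact type.

```python
# Sketch: store survey facts in a relational table and extract a tabular
# summary with plain SQL, ready to feed into a statistical package.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE fact (respondent TEXT, question TEXT, answer TEXT)")
conn.executemany("INSERT INTO fact VALUES (?, ?, ?)", [
    ("r1", "smokes", "yes"), ("r2", "smokes", "no"), ("r3", "smokes", "yes"),
])
rows = conn.execute(
    "SELECT answer, COUNT(*) FROM fact WHERE question = 'smokes' "
    "GROUP BY answer ORDER BY answer").fetchall()
print(rows)  # [('no', 1), ('yes', 2)]
```

Any new cross-tabulation a hypothesis requires is then just another SQL query, rather than a new hand-prepared data file.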

Topic 2: A Critical Study and Evaluation of the SBVR Standard

SBVR (Semantics of Business Vocabulary and Business Rules) is the comprehensive standard for defining the vocabulary and rules of application domains. That is, the aim of SBVR is to capture and represent all the business concepts (vocabulary) and all the business rules.

The importance of business rules is that they drive the business activities and they govern the way the business software system behaves. In other words, the concepts and rules captured by SBVR represent the business knowledge required to understand the business and to build software systems to support the business.

So far, the adoption of the SBVR Standard for practical application development has been slow. The aim of the thesis is to study the SBVR standard in depth, to survey the works that have been published since the release of the Standard, and to critically evaluate the applicability of SBVR to practical information system development.

Topic 3: Converting Business Rule Models to UML (Object-Oriented) Models: An Application of Meta-Modelling

Business rules are the most important factor determining the structure and behaviour of an information system. A dominant standard, known as SBVR (Semantics of Business Vocabulary and Business Rules), has been developed to express business rules.

The aim of the thesis is to automate the conversion of an SBVR business rule model into an object-oriented UML business model. This is a very important task for building business-rule-driven information systems.

Typically, the process for building such a system starts with building an SBVR model, and then translates that model into a UML model, which is more suitable for practical implementation. The approach proposed for this thesis consists of the following steps: (1) Build a formal model for SBVR; (2) Build a formal model for UML; (3) Formulate transformation rules to transform a SBVR model into a UML model; (4) Implement a system to automatically translate an SBVR model into a UML model.

Topic 4: Design and Implementation of Web Services for Information Systems

The aim of web services is to make data resources available over the Internet to applications (programs) written in any language. There are two approaches to web services: SOAP-based (where "SOAP" stands for "Simple Object Access Protocol") and RESTful (where "REST" stands for "Representational State Transfer").

RESTful Web services have now been recognized as generally the most useful methods to provide data-services for web and mobile application development. The aim of the thesis is to study the concept of RESTful web services in depth and to construct a catalogue of patterns for designing data-intensive web services.

The aim of the catalogue is to act as a guide for the practical design of web services for application development.

Dr Eric Pardede

Topic 1: Using a Case-Based Recommender System for Subject Selection

The rationale behind this research is the need for a practical system that students can use to select subjects during their study.

While the advice of the course coordinator and the short description of the subject in the handbook are what students most frequently use to make up their minds, they could make more informed decisions by drawing on the experience of past students. In this thesis, the student will use Case-Based Reasoning (CBR) to design and develop a recommender system for subject selection in a higher education context.
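The retrieve step of CBR can be sketched as nearest-neighbour matching over past student cases. The similarity measure (Jaccard overlap of completed subjects) and the case data are illustrative assumptions, not the thesis design.

```python
# Sketch: retrieve the most similar past-student case and recommend the
# subject that student rated highly. Cases and features are hypothetical.
def similarity(a, b):
    """Jaccard overlap of completed subjects as a simple case similarity."""
    return len(set(a) & set(b)) / len(set(a) | set(b))

def recommend(done, cases):
    best = max(cases, key=lambda c: similarity(done, c["done"]))
    return best["liked"]

cases = [
    {"done": ["db", "algos"], "liked": "data-mining"},
    {"done": ["networks", "security"], "liked": "forensics"},
]
print(recommend(["db", "algos", "stats"], cases))  # data-mining
```

The research questions then sit on top of this skeleton: which case features and similarity parameters actually predict a good subject choice.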

The research component of this project is the identification and validation of the CBR approach and its parameters for the recommendation system. NOTE: This topic can also be used for CSE4CPA/CPB.

Topic 2: Detecting Improper User Behaviours in Online Social Networks

Online Social Networks (OSNs) provide benefits for daily life activities.

They also bring various risks by facilitating improper user behaviours. In this study, the student will select one type of improper behaviour in OSNs (cyber-bullying, cyber-stalking, hate campaigns, etc.) and use an available technique, such as sentiment analysis, to avoid or counter the behaviour. The outcome of this research is a strategy or policy that can be considered by OSN providers.

Topic 3: Using Educational Technologies to Facilitate Constructive Alignment in Subject Design

Constructive alignment (CA) is a subject design concept used in the higher education sector.

The idea is to identify the three basic components of a subject (Intended Learning Outcomes (ILO), Teaching Learning Activities (TLA), and Assessment Tasks (AT)) and to integrate them into a cohesive alignment with student learning as the ultimate goal. In this thesis, the student will review educational technology methods and tools that have been used in the higher education sector.

Based on this benchmarking activity, the student will identify the most appropriate methods and tools to enhance CA for a particular subject design (most likely a subject in an IT/Engineering course).

Topic 4: Mapping Relational Database (RDB) Schemas to NoSQL Database Schemas and Query Rewriting of SQL to NoSQL Queries

With the increasing usage of NoSQL databases in many applications, there is a tendency for existing data stored in RDBs to be converted into NoSQL structures.

Since there are several families of NoSQL databases (key-value, column, document, graph), the mapping of an RDB schema to a NoSQL database schema is not straightforward. Different families of NoSQL databases treat the constraints in an RDB differently.

The outcome of this thesis is a set of proposed mapping rules from RDB to NoSQL database schemas. In addition, once the mapping rules are established, the research can be extended by proposing the rewriting of SQL queries in an RDB into NoSQL queries.
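One common mapping for the document family is to embed child rows under their parent, turning a foreign-key join into a nested document. The sketch below uses hypothetical order/item tables; other NoSQL families (key-value, column, graph) would map the same schema quite differently, which is the heart of the thesis.

```python
# Sketch: map a parent/child relational pair into a document-store shape
# by embedding the child rows under their parent row.
orders = [{"id": 1, "customer": "Ann"}]
items = [{"order_id": 1, "sku": "A1"}, {"order_id": 1, "sku": "B2"}]

def to_documents(orders, items):
    docs = []
    for o in orders:
        doc = dict(o)
        doc["items"] = [i["sku"] for i in items if i["order_id"] == o["id"]]
        docs.append(doc)
    return docs

print(to_documents(orders, items))
# [{'id': 1, 'customer': 'Ann', 'items': ['A1', 'B2']}]
```

Once rows are embedded this way, an SQL join query over the two tables rewrites to a single-document lookup, illustrating the query-rewriting half of the topic.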

Dr Mahardhika Pratama

Topic 1: Advanced Evolving Intelligent Systems

Data stream mining is today one of the most challenging research topics, because we have entered the data-rich era. This situation requires computationally light learning algorithms that scale to process large data streams.

Furthermore, data streams are often dynamic and do not follow a specific and predictable data distribution. A flexible machine learning algorithm with a self-organizing property is desired to overcome this situation, because it can adapt itself to any variation of data streams.

The evolving intelligent system (EIS) is a recent initiative of the computational intelligence society (CIS) for data stream mining tasks. It features an open structure, where it can start either from scratch with an empty rule base or from an initially trained rule base.

Its fuzzy rules are then automatically generated according to the contribution and novelty of the data stream. In this research project, you will work on extensions of existing EISs to enhance their online learning performance, improving predictive accuracy and speeding up the training process.
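A minimal, hypothetical sketch of the "open structure" idea: a rule is created when a sample is novel (far from every existing rule centre), otherwise the nearest rule is adapted incrementally. Real EISs use fuzzy rules and richer novelty and contribution measures; this only shows the growing mechanism, with an invented class name and threshold.

```python
# Toy 1-D evolving rule base: grow on novelty, otherwise adapt.
class TinyEIS:
    def __init__(self, novelty_threshold=1.0):
        self.centres = []          # one "rule" per centre
        self.counts = []
        self.threshold = novelty_threshold

    def learn_one(self, x):
        """Single-pass update from one streaming sample."""
        if not self.centres:
            self.centres.append(x)
            self.counts.append(1)
            return
        dists = [abs(x - c) for c in self.centres]
        i = dists.index(min(dists))
        if dists[i] > self.threshold:      # novel -> create a new rule
            self.centres.append(x)
            self.counts.append(1)
        else:                              # familiar -> adapt nearest rule
            self.counts[i] += 1
            self.centres[i] += (x - self.centres[i]) / self.counts[i]

eis = TinyEIS(novelty_threshold=1.0)
for x in [0.0, 0.1, 0.2, 5.0, 5.1]:
    eis.learn_one(x)
print(len(eis.centres))   # two well-separated groups -> two rules
```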

A research direction to be pursued in this project is to address the issue of uncertainty in data streams.

Topic 2: Machine Learning Algorithms for Online Big Data Analytics

The era of big data refers to a scale of dataset which goes beyond the capabilities of existing database management tools to collect, store, manage and analyze.

Although big data is often associated with the issue of volume, researchers in the field have found that it is inherently tied to the other Vs as well: variety, velocity, veracity and value. The so-called MapReduce from Google is among the most widely used approaches. Nevertheless, the vast majority of existing works are offline in nature, because they assume full access to the complete dataset and allow a machine learning algorithm to perform multiple passes over all data.

In this project, you will develop an online parallelization technique to be integrated with the evolving intelligent system (EIS). Moreover, you will develop a data fusion technique which combines the results of EISs from different data partitions.
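The partition-then-fuse idea can be sketched as follows. This is a hypothetical stand-in: a single-pass running mean plays the role of a per-partition online learner, and partial results are fused by a count-weighted average, so no learner ever touches the full dataset. A real solution would fuse EIS rule bases, not means.

```python
# Hypothetical partition/fusion sketch with a running mean as the
# stand-in "online learner" for each data partition.

def online_mean(stream):
    """Single-pass (online) mean: each sample is seen exactly once."""
    n, mean = 0, 0.0
    for x in stream:
        n += 1
        mean += (x - mean) / n
    return n, mean

def fuse(partials):
    """Combine per-partition results, weighting by sample counts."""
    total = sum(n for n, _ in partials)
    return sum(n * m for n, m in partials) / total

data = list(range(100))
partitions = [data[i::4] for i in range(4)]   # 4 disjoint partitions
partials = [online_mean(p) for p in partitions]
print(fuse(partials))   # equals the mean of the full dataset: 49.5
```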

Topic 3: Metacognitive Scaffolding Learning Machine

Existing machine learning algorithms are cognitive in nature: they consider only the issue of how-to-learn. One may agree that the learning process of human beings is always meta-cognitive in nature, because it involves two other issues: what-to-learn and when-to-learn.

Recently, the notion of the metacognitive learning machine has been developed and exploits the theory of the meta-memory from psychology. The concept of scaffolding theory, a prominent tutoring theory for a student to learn a complex task, has been implemented in the metacognitive learning machine as a design principle of the how-to-learn part.

This project will be devoted to enhancing our past work on the metacognitive scaffolding learning machine. It will study refinements of the learning modules to achieve better learning performance.

Topic 4: Advanced Evolving Intelligent Systems in a Complex Manufacturing Industry

Undetected or premature tool failure may lead to costly scrap or rework arising from impaired surface finishing, loss of dimensional accuracy, or possible damage to the work-piece or machine.

The issue requires the advancement of conventional tool-condition monitoring systems (TCMSs) using online adaptive learning techniques to predict tool wear on the fly.

The nonlinear and uncertain nature of machining processes presents very complex issues to be resolved by both academia and industry, because of the use of multi-point cutting tools at high speed, varying machining parameters, and the inconsistency and variability of cutter geometry/dimensions.

(These are Honours and Masters by Coursework thesis topics from the Department of Computer Science and Information Technology, La Trobe University. Students should discuss their topic of interest with the respective staff.)

The cutting-edge learning methodologies developed in this project will pioneer frontier tool-condition monitoring technologies in manufacturing industries.

Topic 5: Online Text Classification Using Advanced Evolving Intelligent Systems

Today, we confront an explosion of social media text data. From these massive amounts of data, various data analytic tasks can be performed, such as sentiment analysis, recommendation, web news mining, etc.

Because social media data constitute text data, they usually involve a high-dimensionality problem. For example, two popular text classification problems, namely 20 Newsgroups and Reuters-21578 top-10, have more than 15,000 input features.

Furthermore, information on social media platforms is continuously growing and rapidly changing. This requires highly scalable and adaptive data mining tools that search for information far beyond what existing tools can do: the evolving intelligent system. The research outcome will be useful in large-scale applications that go beyond the capabilities of existing data mining technologies.

This project will not only cope with the exponential growth of data streams in social media, but will also develop a flexible machine learning solution that adapts to the time-varying nature of social media data.

Professor Wenny Rahayu

Topic 1: NEW (This topic is supervised together with Dr Kayes)

Big data is too large, dynamic and complex to capture, analyse and integrate using the currently available computing tools and techniques.

By definition, it can be characterized by five V's: volume, velocity, variety, veracity and value. Big data collection, integration and storage are the main challenges of this project, as the integration and storage of big data require special care.

Consequently, it is necessary to prevent possible data loss between collection and processing, as big data always comes from a great variety of sources, including high-volume streams of dynamic environmental data (e.g., data from IoT devices or sensors), which by nature have dynamically changing characteristics. As such, it opens new scientific (research) directions for the development of new underlying theories and software tools, including more advanced and specialized analytics.

However, most of the big data technologies today (e.g., Hadoop) lack sufficient techniques and tools to integrate big data from heterogeneous sources, build a single repository, and ensure the availability of correct and updated data at any time. For example, the volume of big data is too large to load into a desktop's screen/memory, fit in a standard database, or be analysed/handled by traditional database/software tools.

In order to integrate big data from various sources with different variety and velocity and build a central repository accordingly, it is increasingly important to develop a new scientific methodology, including new software tools and techniques. In particular, the main focus of this project is to capture, analyse and integrate big data from different sources, including dynamic streaming data and static data from database.

Towards this end, Government data can be used to analyse and develop applications and tools which can ensure benefit to society. For example, government open-data portals provide public access to government big data, including open datasets.

Topic 2: Developing a Secure Electronic Health Service (Possibly a Mobile Health Service) to Exchange EHR (Private and Sensitive Data). NEW (This topic is supervised together with Dr Kayes)

In recent years, electronic health services have been increasingly used by patients, healthcare providers and healthcare professionals (e.g., hospital doctors, nurses, researchers) with the growth of dynamic ICT environments.

Healthcare consumers and providers have been using a variety of such services via different technologies such as desktops, mobile phones, smartphones and tablets. For example, the eHealth service is used in Australia to store and transmit the health information of users in one secure and trusted environment.

However, security is still a big challenge and central research issue in the delivery of electronic health services. In particular, it is essential to have the security mechanisms and policies for the exchange of electronic health records (EHR) between patients and healthcare providers/professionals.

In critical situations (e.g., when a patient's health condition is critical), how can an electronic health service allow the responsible healthcare professionals to access a patient's necessary EHR (e.g., health history, private health information)? Moreover, healthcare professionals with different access rights should also be monitored as a further security measure, without affecting the healthcare workflow. In addition to the security issue, privacy is also a concern that should not be compromised, especially when there is a need to ensure security.

For example, how can a patient be sure that his or her privacy will be protected by healthcare providers/professionals? How can an electronic health service ensure selective sharing of health information which can be derived from the EHR and approved by the relevant patients/healthcare professionals?

Topic 3: Building a Collaborative Online Decision Support System (DSS)

The main aim of this project is to enable online right-time data analysis and statistical functions to generate the different reports that are required for collaborative decision making.

In a collaborative system, the different users/organisations that form the collaboration will be able to make on-the-fly decisions based on the most up-to-date reporting (e.g., in cases such as natural disasters: bushfires, tsunamis, etc.). This collaborative DSS will be built on an underlying integrated data repository which captures the different data sources relevant to the different organisations in the collaborative environment. Within the DSS, some measurements relevant to an individual organisation (e.g., a certain KPI, or Key Performance Indicator) may be embedded in addition to organisation performance information.

The qualitative measures specified within the KPIs will be mapped to numeric rankings inside the decision support system to analyse the level of current performance and to identify the most appropriate level of future commitments required from each participating organisation. The main focus of the collaborative decision support system is the availability of heterogeneous, consolidated data at the right time and right place.
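The qualitative-to-numeric mapping described above can be sketched minimally as follows. The scale, the KPI names and the averaging rule are all invented assumptions; a real DSS would use the scales agreed by the participating organisations.

```python
# Minimal sketch: map qualitative KPI levels to numeric rankings so
# they can be aggregated inside the DSS. Scale and KPI names are
# hypothetical.

KPI_SCALE = {"poor": 1, "fair": 2, "good": 3, "excellent": 4}

def score_organisation(kpi_reports):
    """Average the numeric rankings of one organisation's KPI reports."""
    ranks = [KPI_SCALE[level] for level in kpi_reports.values()]
    return sum(ranks) / len(ranks)

reports = {"response_time": "good", "coverage": "excellent", "cost": "fair"}
print(score_organisation(reports))   # (3 + 4 + 2) / 3 = 3.0
```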

Topic 4: Query Optimization for Spatio-Temporal Data Warehouses

With the increasing popularity of large heterogeneous data repositories and corporate data warehousing, there is a need to increase the efficiency of the queries used for analysis. The case is even stronger in database environments that hold both spatial and temporal information.

Spatio-Temporal data includes all time slices pertinent to each object or entity. For example, an area/suburb in a city may have static properties which include area name, description, population, etc.

However, for each particular area there will be spatial information (coordinates, shape, etc.) and a time slice during which a set of values for the above properties is valid. In the example above, an area may change shape/size (e.g., be enlarged to cover more space) in the next few years due to some government policy, while still keeping the same area name. The main focus of this topic is to investigate ways to optimize the queries used to analyse such spatio-temporal data.

Topic 1: MAKING THE UNKNOWN KNOWN - FORMALISING THE DISCOVERY OF DESIGN KNOWLEDGE

There is a famous one-liner by Donald Rumsfeld. One of the big problems faced by designers is: what is "out there" that we don't know about but is relevant to the design? That is, what are the unknown unknowns? So, what does this mean for system development and design? Can this be formalized? Do we do it already? Where does domain expertise come into this? This sounds hard, but it may not be, and if you can do it well, it is publishable.

The New World of Real World Systems

Topic 2: HOW SHOULD WE DESIGN SYSTEMS THAT HAVE TO SURVIVE FOR THE NEXT 50 YEARS?

Technology is changing; however, the small system you build today may still be in use 50 years from now.

OK, you are the Government of a country developing a new Social Welfare system. What exactly does this mean? How would you do this? How serious is this problem? What would make it possible? What would make it difficult? A good version of this will get published. Very expensive systems survive for decades, even if (or especially when) they are mission critical. How do we cope with this in the above context? We know a lot about component-based design, software re-use and related issues.

How do we bring all this together so that systems can deal with change?

Web Browser Technology

Topic 3: WEB SEARCH NAVIGATION AND SUPPORT SYSTEM (become famous)

In today's world, web searches are a major activity undertaken by people for industrial, research and other reasons. They involve searches across a very wide range of web pages from a wide range of sources.

The searcher may download pages, extract information from pages, and, in the process, create a history of link activations. The problem people face is what happens if the searcher has to stop and resume the process days later.

The purpose of this project is to provide support for people using Google as a search engine. A major problem here is keeping track of the sites and documents visited, viewed and downloaded.

Searches may be spread over several sessions, and users need to be able to resume them. The project requires the development of software and the design of a complete, zero-adoption-cost tool.

Topic 4: Essential knowledge for web-site developers

Given the current IT situation, web-sites are likely to be a major aspect of businesses and organisations for the next 50 years.

What are the categories of knowledge that web-site developers should have, if this new business aid is to be a genuinely socially useful aid, rather than a public nuisance? The goal here is the development of a degree program and the documentation of the knowledge collections needed.

Topic 5: BROWSER SEARCH PRECISION IMPROVEMENT

How often does a Google search produce results that seem to have no relationship to what you really wanted? The purpose of this project is to find some simple means of improving browser precision. That is, I want to find only items which are really useful to me.

To do this, we need to first explore the current query systems, and document them. Then to propose means of getting simpler results, and to implement a prototype.

You will need to develop a knowledge of "data mining" to extract classification material from the results returned by browsers.

Topic 6: Automatic Web-page Link Clicking Minimisation

Lots of sites I use need several links to display/access very simple information.

So, I seem to spend ages linking around hyperspace to see information which would easily fit on one page. Could I build a tool which would allow a user to define a new, single page that had all the data concerned?

GAMES TECHNOLOGY

Topic 7: MEASURING IMAGE QUALITY OF GAMES USING DIGITAL CAMERA APPROACHES

The digital camera industry has put a lot of work into image quality assessment, both subjective and objective.

Image quality is of course a major concern in the gaming industry, however, they face the problem of high-speed image generation, rather than simply recording images. At the same time, there are now medium resolution (14MP and above) digital cameras that can capture (and, process) up to 10 or even 14 frames per second, at full resolution.

As I said, the digital camera domain has various measures of image quality. How do they map onto the needs of games, or don't they? If not, what should we do?

ZAIA Projects - Zero Adoption Impact Applications

Topic 8: Design Rules for ZAIA Applications

OK, how much time do you waste learning to use a new software package? And how many computer systems that you know of are invisible, or nearly invisible, in the sense that they assist you BUT don't intrude on your non-computer work patterns? Simple examples that you may be familiar with are ABS, traction control, and automobile engine management systems.

But, what other ones can you think of? Of course, this sounds like ubiquitous computing, however, we are going beyond this. Our goal is the production of systems which can be installed in a work environment, either computerised or not, and have almost zero learning effort, but, which will make life easier.

What should the design rules for a system of the ZAIA type look like? One way of doing this would be to design and demonstrate such a tool, such as tabbase.

Topic 9: ULTRA USER FRIENDLY LIBRARY SYSTEM - TASK ORIENTED COMPUTING

Currently, library catalog systems place a massive load on users, who must take many steps to locate an item. In practice, users identify a "Target" in three ways.

1. They have a complete citation for the Target obtained from a publication.
2. They have a Google search result and wish to obtain the Target.
3. They don't know what they are looking for, so they are making a key-word based search.

For a start, we are interested in the first case.

How can we make this VERY easy for the user, so that they provide the citation and are told "here is the item", or "here is where it is, but you need to buy it", without any intervention by themselves (or only in weird cases)? This project may involve collaboration with the LTU library.

CLOUD COMPUTING

Topic 10: The Impact of Cloud Computing on Component Based Design

Exactly how can Cloud-based systems be used in component-based design? Develop design rules, and show some case studies.

Software Testing

Topic 11: IMPROVING SOFTWARE QUALITY BY CONSTRUCTING OPERATIONAL PROFILES BASED ON BLACK-BOX TEST RESULTS

It has been suggested by the author that one way of improving product quality is by building a wrapper around a system that blocks those cases that were found to be handled incorrectly during testing.
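A hypothetical sketch of this wrapper idea follows: cases found to be mishandled during black-box testing are recorded, and the wrapper blocks them before they reach the wrapped system. The function names and the set of blocked inputs are invented for illustration.

```python
# Sketch: an "operational profile" wrapper that blocks known-bad cases.

FAILING_CASES = {(-1,), (0,)}   # inputs the tests showed to be mishandled

def wrap(system, failing_cases):
    """Return a guarded version of `system` that rejects known-bad inputs."""
    def guarded(*args):
        if args in failing_cases:
            raise ValueError(f"input {args} blocked: known-bad case")
        return system(*args)
    return guarded

def fragile_reciprocal(x):
    return 1.0 / x              # misbehaves for some inputs (e.g. zero)

safe = wrap(fragile_reciprocal, FAILING_CASES)
print(safe(4))                  # 0.25; safe(0) would raise instead of crash
```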

This project investigates this idea and proposes and demonstrates design rules and tools for achieving this for different classes of software product.

Topic 12: TEST FRAMEWORKS FOR CLOUD COMPUTING

How will developers test cloud applications? What exactly will the changes to, for example, Black Box testing be for clouds? This project requires analysis of both Cloud systems and testing, and also of the way in which Clouds will work. The thesis topic is to survey this field, define the problem, and produce a simple tool, if possible.

Topic 13: TEST HARNESSES - DERIVING WHITE BOX COVERAGE ANALYSERS FROM MACHINE-READABLE DESIGNS

In the past, people have built systems that tracked the execution of a system, checking that the actual procedures/classes etc.

executed are those presumed to be the ones that should be executed for a particular test case. In addition, the value of data at specific points in the execution process might be checked against expected values.

However, this requires that there is a close correlation between the design representation and the code. That is, it needs to be possible to easily (and preferably automatically) generate the "hooks" in the code that will make this possible.

The tester wants to be able to specify a test script in terms of the names used in the design, and have the harness execute the code, doing the required checking. The thesis topic is to survey this field, define the problem, and produce a simple tool, if possible.

Specification Systems

Topic 14: UNAMBIGUOUS, INFORMAL SPECIFICATION TECHNIQUES

Discussions of specification capture raise the issue of a suitable language for the specification of software systems. This must be sufficiently unambiguous for designers to be sure that a specification has been captured, and sufficiently informal for users to agree that they understand what has been achieved.

The goal is to develop a suitable language and to discuss and formalize the issues involved.

Topic 15: THE USE OF NVIVO/NUDIST IN REQUIREMENTS ANALYSIS

NUDIST is a tool developed by QSR, a La Trobe spin-off company started by Lyn and Tom Richards (Tom was a Reader in this Department).

Its use is to analyse qualitative data, to find common ideas and threads of evidence.

This project would look at its use in Requirements Engineering.


Re-use is the process of using existing components to fabricate a system. A major part of the problem (apart from the issue of the existence of re-useable components), is the problem of classifying and then retrieving the components.

Much emphasis is often placed on the classification of the components. However, experience is that components can be hard to retrieve since the classifications do not always match either the application domain or the possible purpose proposed for the module.

Alternatively, there may be some implied re-use possible which is not discovered. One possible explanation may be that the form of classification used, the language, is either too restrictive, or that there do not exist appropriate mappings from the classification language to the problem space in which the component could be used.

Part of this could be due to the absence of suitable "Universes of Discourse", i.e., commonly understood meanings which are not stated explicitly. Another could be that specifications may imply their inverses, or that common functions can be deduced by expanding a specification, making use of the "Universe of Discourse" that is valid at that point in the design.

The purpose of this project is to try to identify some method which might be used to address these issues. It would not be expected that a complete solution would necessarily be discovered.

Project Management and Process Recording

Topic 17: Recording process enactments in student team projects (with Dr Torab Torabi)

Student software engineering projects such as PRJ involve different projects, each being undertaken by more than one team. This means that there may be different process models used, and multiple instances of similar process models being applied independently by multiple, independent teams.

The purpose of the topic is to develop a formal plan for capturing process execution data, and, for its analysis. You will need some familiarity with statistics, however, the results would be REALLY important.

You will need to do a literature survey on process recording AND experimental software engineering. Good results will be publishable, and, the model could become widely used.

Topic 18: THE ROLE OF ONTOLOGY CONSTRUCTION IN SOFTWARE DEVELOPMENT PROJECTS

It has been said that software projects are often a process of knowledge gathering.

However, this process seems to be covert rather than overt. In addition, the process of construction of taxonomies is quite well known, but the extent to which it plays a role in software projects in terms of domain knowledge is probably known covertly, but is not remarked upon.

The purpose of this project is to examine the knowledge acquisition activities in software development and to see how they may be described as taxonomy construction exercises.

MISCELLANEOUS SOFTWARE ENGINEERING

Topic 19: Prescriptive Taxonomy-based Methods for GQM

The Goal/Question/Metric paradigm for constructing programs for measuring software (quality, performance) was developed by Basili in the early 1980s.

The idea is simple, and has been the subject of many papers and some tool development. However, in practice, the process of developing questions that lead to metrics is extremely difficult to describe.

My view is that the problem may be based upon a knowledge acquisition process which may be assisted by taxonomies of the application and measurement domains.

Topic 20: The use of taxonomies in safety critical systems design

Successful SCS implementation depends upon the designer's ability to interpret the spec.

, and to identify unexpected behaviours implied by that spec. Alternatively, we need to ensure the behaviour is predictable given unexpected inputs.

However, since the behaviour/input is unexpected by definition, the people writing the spec. cannot foresee it. Is it possible that taxonomies may help with this? Obviously, we need some processes that expose possible fault conditions in terms of an external event that was not foreseen, and hence was not considered or checked. As an example, on July 25th 2000, a Concorde taking off from Charles de Gaulle airport in Paris crashed, killing all on board and four people in the hotel it hit.

The aircraft's tyres hit a piece of metal that had fallen from a DC10 that had departed earlier. This caused the tyres to rupture and fly into the air, rupturing a fuel tank in the wing and causing a large fire.

Would it be reasonable for designers to ensure the fuel tanks are not ruptured by burst tyres? I don't know if it would be.

Would it be reasonable to design the fuel tanks so that they would not be punctured by a 50cal machine gun round? Possibly. If this WAS a design requirement, then it would probably mean the fuel tank would survive the debris from a burst tyre.

Topic 21: Safety critical systems (SCS) designers build complex systems reliably and with low error rates. Can their techniques be used for general system development?

We know that, generally, SCS are one of software development's success stories.

Sure, there are problems; however, the SCS developers do very well indeed. Can these techniques be applied to "normal" system development?

Dr Andrew Skabar

Topic 1: Document Clustering and Classification

The management of unstructured text-based data (i.e., data not amenable to storage inside a relational database: emails, faxes, web pages, etc.) is a major problem pervading the information technology industry. It is widely accepted that at least 80% of the data held by companies is unstructured.

Over recent years there has been a growing interest in creating automatic systems that assist users in managing documents such as emails. This project provides scope for students to learn about different facets of dealing with unstructured (text) data, and in particular, about how clustering and classification techniques can be successfully applied to it.

To take this topic, students must have received a strong mark in CSE2AIF, and one or more of CSE3ALR and CSE3CI. Students should also have some background in both the Python and Matlab computer programming languages.

Topic 2: Text Processing Incorporating Semantics

Whereas information retrieval is typically conducted on text at the document level, in recent years researchers have become increasingly interested in also dealing with shorter segments of text, e.g., individual sentences. One of the difficulties of dealing with sentence-level text is that the similarity measures typically used at the document level (i.e., measures such as cosine similarity, which are based on a vector-space representation) are not applicable to analysing sentence-level text due to the lack of word co-occurrence.

Consequently, it is usually necessary to include semantic information provided by way of WordNet or other lexical resources. This project provides scope for students to learn about different facets of semantic text processing, and to apply appropriate techniques to some problem of the student's choosing.
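The co-occurrence problem can be seen in a few lines: under a plain bag-of-words vector-space representation, two sentences with similar meaning but no shared words get cosine similarity zero. The example sentences below are invented for illustration.

```python
# Why plain cosine similarity fails for short text: no shared words
# means similarity 0, regardless of meaning.
import math

def bow(sentence):
    """Bag-of-words term counts for one sentence."""
    counts = {}
    for w in sentence.lower().split():
        counts[w] = counts.get(w, 0) + 1
    return counts

def cosine(a, b):
    """Cosine similarity of two bag-of-words dictionaries."""
    shared = set(a) & set(b)
    dot = sum(a[w] * b[w] for w in shared)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm

s1 = bow("the doctor examined the patient")
s2 = bow("a physician checked an invalid")   # same idea, no common words
print(cosine(s1, s2))   # 0.0 -- hence the need for WordNet-style semantics
```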

To take this topic, students must have received a strong mark in CSE2AIF, and one or more of CSE3ALR and CSE3CI. Students should also have experience with the Python programming language, and preferably also Matlab.

Topic 3: Open Project in Data Mining

Data mining is the process of sorting through large amounts of data in search of novel and useful information that can be used to aid decision making.

It is often used in business intelligence and financial analysis systems, but is increasingly being used in the sciences to extract information from the enormous data sets generated by modern experimental and observational methods.


" This project provides wide scope for students to apply or develop data mining techniques for some domain (i. Students are invited to propose a domain and to discuss this with the supervisor.

To take this topic, students must have received a strong mark in CSE2AIF, and one or more of CSE3ALR and CSE3CI.

Associate Professor Ben Soh

Research areas: Information System Research, Design & Technology, and Cloud Computing; Fault-Tolerant Computing, and Information & Networking Systems; Business Process and Workflow & Supply Chain Management. Note: Details of each project will be finalised prior to signing up.

Topic 1: Wireless Communications, and Mobile and Pervasive Computing

Foundations, Standards, Protocols and Algorithms; Pervasive Infrastructures, Services and Applications; Wireless Sensor and Ad Hoc Networks on: (1) Security & Privacy, and Reliability; (2) Intrusion Detection and Error Control; and others.

Topic 2: Information System Research, Design & Technology and Cloud Computing

Internet-Based Technology relating to: (1) Cloud Computing; (2) Web Availability and Reliability; (3) Fault-Tolerant IS; (4) IS Research. Web Intelligence focusing on: (1) Information Filtering and Retrieval; (2) Searching and Browsing; (3) Data Storage with Grid Intelligence; (4) Backend Database Security, Backup & Recovery.

Topic 3: Performance and Security & Safety in Information, Database & Networking Systems

Security Foundations with regard to: (1) Information System Authorisation & Access Control; (2) Intrusion Detection and Prevention; (3) Cryptography and Secure Communications; (4) Information Forensics, Recovery and Healing; (5) Database Security and Backup & Recovery; and (6) Information Security Risk Management. Security and Safety in Information, Database & Networking Systems and Cloud Computing regarding: (1) Security & Privacy; (2) Trust Management and Security; (3) Web and Web-Services Security; (4) Security and Safety in Ad Hoc and Sensor Networks.

Topic 4: Business Process and Workflow & Supply Chain Management

Business Process and Workflow & Supply Chain Foundations in terms of: (1) Modelling and Design Techniques; (2) Implementation and Language; (3) Interoperability. Business Process and Management relating to: (1) Security Control; (2) Dynamic Workflow Control; (3) Service-Oriented Computing; and others.

Dr Torab Torabi

Topic 1: Maritime Simulation

The research will investigate the use of AI techniques for better path prediction, accident avoidance or port management. The outcome should preferably be able to feed information in and out of the simulation environment.

The research will preferably be conducted with the Maritime Simulation Lab at La Trobe.

Topic 2: Dynamic Context and Applications

In services with huge amounts of data, information is updated very frequently, though each user may need only part of the information within a certain context.

This research investigates the concept of Dynamic Context. The main focus will be on the conceptual specification of the context.

Once such a model has been formally defined, it should be possible to apply it to a large system with many stakeholders who have different information needs.

Topic 3: Task Computing

Users in different environments may use different smart devices to complete their activities (tasks). The aim of this research is to investigate and develop optimal methods for users to make use of the smart devices in their surroundings to accomplish their tasks.

This project will require knowledge of device communication and mobile programming.

Topic 4: Moving Objects / Clusters and Applications

The aim of this research is to investigate and develop methodologies for moving objects or clusters, and to provide prediction and decision support in applications such as disaster management.

The research uses spatial and weather standards to integrate information from different domains and to provide the current context and predictions of objects or clusters in such environments. This research requires knowledge of XML and of weather standards, as well as good programming skills.

Dr Prakash Veeraraghavan

Next Generation Protocols

Topic 1: Power aware routing in mobile networks

The energy-efficiency problem in wireless network design has received significant attention in the past few years. Most of the work has been on designing efficient routing schemes, because traditional routing schemes designed for the Internet tend to consume more power.

The objective of this research is to design a new power-aware routing scheme for ad hoc and sensor networks. (This topic is abstract in nature and requires a good aptitude for mathematics.)

Topic 2: Security in cloud storage

"The two biggest concerns about cloud storage are reliability and security.

Clients aren't likely to entrust their data to another company without a guarantee that they'll be able to access their information whenever they want and no one else will be able to get at it." Thus it is very important to ensure that a proper encryption algorithm is in place to protect data access.
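As a toy illustration of the encrypt-then-MAC pattern such an algorithm would need, the sketch below derives a keystream from SHA-256 and authenticates the ciphertext with HMAC. This is for illustration only, not a vetted cipher; a real design would use an established scheme such as AES-GCM:

```python
import hashlib
import hmac
import os

def keystream(key, nonce, length):
    """Toy keystream: concatenated SHA-256(key || nonce || counter) blocks."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt_then_mac(enc_key, mac_key, plaintext):
    """XOR the plaintext with the keystream, then MAC nonce + ciphertext."""
    nonce = os.urandom(16)
    ks = keystream(enc_key, nonce, len(plaintext))
    ct = bytes(p ^ k for p, k in zip(plaintext, ks))
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    return nonce, ct, tag

def decrypt(enc_key, mac_key, nonce, ct, tag):
    """Verify the MAC first; only then decrypt."""
    expected = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("MAC check failed: data was tampered with")
    ks = keystream(enc_key, nonce, len(ct))
    return bytes(c ^ k for c, k in zip(ct, ks))

nonce, ct, tag = encrypt_then_mac(b"enc-key", b"mac-key", b"top secret payload")
recovered = decrypt(b"enc-key", b"mac-key", nonce, ct, tag)
```

Verifying the MAC before decrypting ensures the client detects tampering by the storage provider or any intermediary, which addresses the integrity half of the concern quoted above.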

In this project the student is expected to review the literature and come up with an efficient, lightweight algorithm for protecting the data.

Topic 3: Audio Compression and Watermarking Schemes

Unlike data compression, which is mostly lossless, audio compression is mostly lossy.

Depending on the type of equipment used for reproduction, different audio compression codecs (such as MP3, RM, etc.) may be used. Watermarking audio while compressing it is a relatively new technique. The objective of this research project is to study various audio compression schemes, their performance over different networks, and the strength of the watermarking schemes used to establish ownership.
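A minimal sketch of one classic technique, least-significant-bit (LSB) watermarking of PCM samples, is shown below. Real schemes embed the mark in the transform domain so it survives lossy compression, which is part of what this project would study; the sample values here are made up:

```python
def embed_watermark(samples, bits):
    """Hide watermark bits in the least-significant bit of 16-bit PCM samples."""
    marked = [(s & ~1) | b for s, b in zip(samples, bits)]
    return marked + samples[len(bits):]

def extract_watermark(samples, n):
    """Read the watermark back out of the first n samples."""
    return [s & 1 for s in samples[:n]]

samples = [1000, -2413, 872, 331, -97, 4520]   # made-up PCM samples
mark = [1, 0, 1, 1]                            # ownership bits to embed
stego = embed_watermark(samples, mark)
recovered = extract_watermark(stego, len(mark))
```

Each sample changes by at most one quantization step, so the mark is inaudible, but an LSB mark is also destroyed by the first lossy re-encode, which is exactly why watermark robustness against codecs is the interesting research question.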

(This topic is abstract in nature and requires a good aptitude for mathematics and music.)

Topic 4: Scheduling in Biosensors

A biosensor is an analytical device for the detection of an analyte that combines a biological component with a physicochemical detector component. It consists of three parts:

1. The sensitive biological element: biological material (e.g. tissue, microorganisms, organelles, cell receptors, enzymes, antibodies, nucleic acids, etc.), a biologically derived material, or a biomimic.

2. The transducer or detector element, which works in a physicochemical way (optical, piezoelectric, electrochemical, etc.) and transforms the signal resulting from the interaction of the analyte with the biological element into another signal that can be more easily measured and quantified.

3. The associated electronics or signal processors, which are primarily responsible for displaying the results in a user-friendly way.

This sometimes accounts for the most expensive part of the sensor device; however, it is possible to generate a user-friendly display that integrates the transducer and the sensitive element. How sensors collect information (via the associated electronics and signal processors), organize it effectively, and transmit it to the base station for real-time processing is an important problem.
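As a toy illustration of in-network data aggregation, the sketch below averages each sensor's readings per time window so that only one summary value per window needs to be transmitted to the base station. The window size and readings are made up:

```python
def aggregate(readings, window):
    """In-network aggregation: average readings per time window so only
    one summary value per window is transmitted to the base station."""
    buckets = {}
    for t, value in readings:
        buckets.setdefault(t // window, []).append(value)
    return {w: sum(vs) / len(vs) for w, vs in sorted(buckets.items())}

# (timestamp, value) pairs from one sensor; values are made up.
readings = [(0, 5.0), (1, 5.2), (2, 5.4), (10, 7.0), (11, 7.2)]
summary = aggregate(readings, window=10)
```

Transmitting two averages instead of five raw samples is what saves the radio energy; the scheduling question is then when each node gets to transmit its summaries.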

This research focuses on various data aggregation and scheduling techniques.

Topic 5: Computer forensics and investigation

Cryptography helps people achieve confidentiality, integrity, and authenticity while communicating with unknown (or known) parties over an untrusted network.

An Intrusion Detection System (IDS) is a way to detect intrusions from event histories. However, once an attacker has hacked into a network or computer, it is necessary to make a thorough study of what information the attacker was looking for and how to collect evidence for prosecution.

The process is largely OS-dependent; it also depends on the software used. This introductory project on computer forensics and crime investigation aims to review the various techniques available in the literature, establish their strengths and weaknesses, and propose suitable improvements.

(Students opting for this project must have a strong mathematical aptitude and strong programming and OS skills.)
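As a toy illustration of the history-based detection mentioned above, the sketch below scans log lines against a couple of well-known attack signatures. The signature set and log lines are illustrative only; a real IDS would use a maintained rule base:

```python
import re

# Two well-known attack signatures (illustrative only).
SIGNATURES = {
    "sql_injection": re.compile(r"union\s+select|' or 1=1", re.IGNORECASE),
    "path_traversal": re.compile(r"\.\./"),
}

def scan_log(lines):
    """Flag any log line that matches a known attack signature."""
    hits = []
    for i, line in enumerate(lines):
        for name, pattern in SIGNATURES.items():
            if pattern.search(line):
                hits.append((i, name))
    return hits

log = [
    "GET /index.html 200",
    "GET /download?file=../../etc/passwd 403",
    "POST /login user=admin' OR 1=1-- 401",
]
alerts = scan_log(log)
```

Matched lines, together with their timestamps and surrounding entries, are exactly the kind of history a forensic investigator would preserve as evidence.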