The problem of identifying prediction models of the indoor climate in buildings is discussed. Identification experiments have been carried out in two buildings, and different models, such as linear ARX, ARMAX and BJ models as well as non-linear artificial neural network (ANN) models of different orders, have been identified from these experiments. Many different input signals have been used in the models, such as the outdoor and indoor temperature, heating power, wall temperatures, ventilation flow rate, time of day and solar radiation. For both buildings, it is shown that ANN models give more accurate temperature predictions than linear models. For the first building, it is shown that a non-linear combination of solar radiation and time of day is important when predicting the indoor temperature. For the second building, it is shown that the indoor temperature depends non-linearly on the ventilation flow rate.
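A minimal sketch, not the authors' implementation, of how a single-input ARX predictor of the kind compared above can be identified by least squares; the arrays `y` (indoor temperature) and `u` (e.g. heating power), the model orders and the synthetic data are all hypothetical:

```python
import numpy as np

def fit_arx(y, u, na=2, nb=2):
    """Fit an ARX(na, nb) model y[t] = sum_i a_i*y[t-i] + sum_j b_j*u[t-j] by least squares."""
    n = max(na, nb)
    rows, targets = [], []
    for t in range(n, len(y)):
        past_y = [y[t - i] for i in range(1, na + 1)]
        past_u = [u[t - j] for j in range(1, nb + 1)]
        rows.append(past_y + past_u)
        targets.append(y[t])
    theta, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(targets), rcond=None)
    return theta  # first na entries: AR coefficients, last nb entries: input coefficients

def predict_one_step(theta, y, u, na=2, nb=2):
    """One-step-ahead predictions using the same regressor layout as fit_arx."""
    preds = []
    for t in range(max(na, nb), len(y)):
        reg = [y[t - i] for i in range(1, na + 1)] + [u[t - j] for j in range(1, nb + 1)]
        preds.append(float(np.dot(theta, reg)))
    return np.asarray(preds)

# Hypothetical usage with synthetic data standing in for measured signals
rng = np.random.default_rng(0)
u = rng.normal(size=500)                       # stand-in heating power
y = np.zeros(500)
for t in range(2, 500):                        # simulate a simple second-order system
    y[t] = 0.7 * y[t - 1] - 0.1 * y[t - 2] + 0.5 * u[t - 1] + 0.05 * rng.normal()
theta = fit_arx(y, u)
```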
In this paper, an approach to weighting features for classification based on the nearest-neighbour rule is proposed. The weights are adaptive in the sense that their values differ in different regions of the feature space. The weight values are found by a random search in the weight space, with the correct classification rate as the criterion maximised during the search. Experiments have shown that the proposed approach is useful for classification. The weight values obtained in the experiments indicate that the importance of a feature may differ from one region of the feature space to another.
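A simplified sketch of such a search under stated assumptions: a single global weight vector is optimised (the paper makes the weights depend on the region of the feature space), candidate weights are drawn uniformly, and a 1-NN rule with a weighted Euclidean distance supplies the classification rate. All data-array names are hypothetical:

```python
import numpy as np

def weighted_knn_accuracy(w, X_train, y_train, X_val, y_val, k=1):
    """Classification rate of a k-NN rule using a feature-weighted Euclidean distance."""
    correct = 0
    for x, y_true in zip(X_val, y_val):
        d = np.sqrt(((X_train - x) ** 2 * w).sum(axis=1))
        nearest = y_train[np.argsort(d)[:k]]
        y_pred = np.bincount(nearest).argmax()
        correct += int(y_pred == y_true)
    return correct / len(X_val)

def random_search_weights(X_train, y_train, X_val, y_val, n_iter=200, seed=0):
    """Random search in the weight space; the correct classification rate is maximised."""
    rng = np.random.default_rng(seed)
    best_w, best_acc = np.ones(X_train.shape[1]), 0.0
    for _ in range(n_iter):
        w = rng.uniform(0.0, 1.0, size=X_train.shape[1])
        acc = weighted_knn_accuracy(w, X_train, y_train, X_val, y_val)
        if acc > best_acc:
            best_w, best_acc = w, acc
    return best_w, best_acc
```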
This paper is concerned with offset lithographic colour printing. To obtain high-quality colour prints, given proportions of cyan (C), magenta (M), yellow (Y) and black (K) inks (the four primary inks used in the printing process) must be accurately maintained in every area of the printed picture. To accomplish this, the press operator needs to measure the printed result to assess the proportions and use the measurement results to reduce the colour deviations. Specially designed colour bars are usually printed to enable the measurements. This paper presents an approach for estimating the proportions directly in colour pictures, without using any dedicated areas. The proportions, i.e. the average amounts of C, M, Y and K inks in the area of interest, are estimated from the CCD colour camera RGB (L*a*b*) values recorded from that area. Local kernel ridge regression and support vector regression are combined to obtain the desired mapping L*a*b* ⇒ CMYK, which can be multi-valued.
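A rough sketch of the regression step, assuming scikit-learn's KernelRidge and SVR as stand-ins for the paper's local kernel ridge regression and support vector regression, and a plain average as a placeholder for the way the two estimators are actually combined; the training data below are synthetic:

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.multioutput import MultiOutputRegressor
from sklearn.svm import SVR

# Hypothetical training data: camera L*a*b* values and the corresponding
# known CMYK ink proportions (e.g. from a calibration chart).
rng = np.random.default_rng(0)
lab = rng.uniform(0.0, 100.0, size=(300, 3))     # stand-in L*a*b* samples
cmyk = rng.uniform(0.0, 1.0, size=(300, 4))      # stand-in CMYK proportions

krr = KernelRidge(kernel="rbf", alpha=1.0, gamma=0.01).fit(lab, cmyk)
svr = MultiOutputRegressor(SVR(kernel="rbf", C=10.0, epsilon=0.01)).fit(lab, cmyk)

def estimate_cmyk(lab_sample):
    """Combine the two regressors; here simply by averaging their predictions."""
    x = np.atleast_2d(lab_sample)
    return 0.5 * (krr.predict(x) + svr.predict(x))

print(estimate_cmyk([50.0, 10.0, -20.0]))
```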
This paper presents an approach to determining the colours of specks in an image of pulp being recycled. The task is solved through colour classification by an artificial neural network. The network is trained using fuzzy possibilistic target values. The number of colour classes present in the images is determined through the self-organising process in a two-dimensional self-organising map. The experiments performed have shown that the colour classification results correspond well with human perception of the colours of the specks.
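A minimal sketch of how fuzzy possibilistic targets of this kind can be built and used, assuming class prototypes are already available (the paper derives the number of classes from a two-dimensional self-organising map) and using a small scikit-learn MLP in place of the paper's network; all values are made up:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical RGB prototypes for the speck colour classes.
prototypes = np.array([[200, 60, 60], [60, 60, 200], [90, 90, 90]], dtype=float)

def possibilistic_targets(rgb, sigma=40.0):
    """Fuzzy possibilistic memberships: one value per class, not forced to sum to 1."""
    d2 = ((prototypes - rgb) ** 2).sum(axis=1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

# Stand-in training pixels and their soft (possibilistic) targets
rng = np.random.default_rng(0)
pixels = rng.uniform(0, 255, size=(500, 3))
targets = np.array([possibilistic_targets(p) for p in pixels])

# A small network trained to reproduce the fuzzy memberships;
# the predicted class is the one with the highest membership.
net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(pixels, targets)
print(net.predict([[210, 70, 65]]).argmax())
```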
This paper presents a neural-network-based method and a system for colour measurements on printed halftone multicoloured pictures and halftone multicoloured bars in newspapers. The measured values, called a colour vector, are used by the operator controlling the printing process to make appropriate ink feed adjustments to compensate for colour deviations of the measured picture from the desired print. By the colour vector concept, we mean the CMY or CMYK (cyan, magenta, yellow and black) vector, which lives in the three- or four-dimensional space of printing inks. Two factors contribute to the values of the vector components, namely the percentage of the area covered by cyan, magenta, yellow and black inks (tonal values) and the ink densities. The values of the colour vector components increase if tonal values or ink densities rise, and vice versa. If reference values of the colour vector components are set from a desired print, then, after an appropriate calibration, the colour vector measured on an actual halftone multicoloured area directly shows how much the operator needs to raise or lower the cyan, magenta, yellow and black ink densities to compensate for the colour deviation from the desired print. Eighteen months of experience with the system in the printing shop testifies to its usefulness through the improved quality of multicoloured pictures, reduced ink consumption and, consequently, less severe problems of smearing and print-through.
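A toy illustration, not the system itself, of how a measured colour vector compares against reference values set from the desired print to suggest ink-density corrections; the numbers are invented:

```python
import numpy as np

# Hypothetical reference and measured CMYK colour vectors for one picture area.
reference = np.array([0.62, 0.48, 0.55, 0.20])   # set from the desired print
measured  = np.array([0.58, 0.52, 0.55, 0.24])   # produced by the measuring network

# After calibration, the component-wise deviation indicates how much the operator
# should raise (positive) or lower (negative) each ink density.
density_correction = reference - measured
print(dict(zip("CMYK", np.round(density_correction, 3))))
```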
This paper presents a hierarchical modular neural network for colour classification in graphic arts, capable of distinguishing among very similar colour classes. The network performs the analysis in a rough-to-fine fashion and achieves a high average classification speed and a low classification error. In the rough stage of the analysis, clusters of highly overlapping colour classes are detected. Discrimination between such colour classes is performed in the next stage by using additional colour information from the surroundings of the pixel being classified. Committees of networks make the decisions at this stage. The outputs of the committee members are adaptively fused through the BADD defuzzification strategy or the discrete Choquet fuzzy integral. The structure of the network is established automatically during the training process. Experimental investigations show the capability of the network to distinguish among very similar colour classes that can occur in multicoloured printed pictures. The classification accuracy obtained is sufficient for the network to be used for inspecting the quality of multicoloured prints.
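A small sketch of the discrete Choquet fuzzy integral as one way of fusing committee outputs, assuming the fuzzy measure over member subsets is already given (the paper determines the fusion adaptively); the measure values below are hand-picked purely for illustration:

```python
import numpy as np

def choquet_integral(scores, measure):
    """Discrete Choquet integral of committee outputs `scores` with respect to a
    fuzzy measure given as a dict mapping frozensets of member indices to [0, 1]."""
    order = np.argsort(scores)                   # ascending order of the outputs
    result, prev = 0.0, 0.0
    for rank, idx in enumerate(order):
        subset = frozenset(order[rank:])         # members whose output is >= the current one
        result += (scores[idx] - prev) * measure[subset]
        prev = scores[idx]
    return result

# Hypothetical three-member committee and a hand-picked fuzzy measure.
measure = {
    frozenset(): 0.0,
    frozenset({0}): 0.4, frozenset({1}): 0.3, frozenset({2}): 0.35,
    frozenset({0, 1}): 0.7, frozenset({0, 2}): 0.8, frozenset({1, 2}): 0.6,
    frozenset({0, 1, 2}): 1.0,
}
print(choquet_integral(np.array([0.9, 0.6, 0.75]), measure))
```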
Few-shot meta-learning involves training a model on multiple tasks so that it can efficiently adapt to new, previously unseen tasks from only a limited number of samples. However, current meta-learning methods assume that all tasks are closely related and belong to a common domain, whereas in practice tasks can be highly diverse and originate from multiple domains, resulting in a multimodal task distribution. This poses a challenge for existing methods, as they struggle to learn a shared representation that can be easily adapted to all tasks within the distribution. To address this challenge, we propose a meta-learning framework that can handle multimodal task distributions by conditioning the model on the current task, resulting in faster adaptation. Our proposed method learns to encode each task and generate task embeddings that modulate the model's activations. The resulting modulated model becomes specialized for the current task, which leads to more effective adaptation. Our framework is designed to work in a realistic setting where the mode from which a task is sampled is unknown. Nonetheless, we also explore the possibility of incorporating auxiliary information, such as the task-mode label, to further enhance the performance of our method when such information is available. We evaluate our proposed framework on various few-shot regression and image classification tasks, demonstrating its superiority over other state-of-the-art meta-learning methods. The results highlight the benefits of learning to embed task-specific information in the model to guide the adaptation when tasks are sampled from a multimodal distribution.
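A bare-bones numpy sketch of FiLM-style task modulation in the spirit described above, not the authors' architecture: a task encoder averages encodings of the support pairs and produces per-feature scale and shift terms that modulate the hidden activations of a base network. All weights and data are random stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical base-network weights (one hidden layer) and task-encoder weights.
W1, b1 = rng.normal(size=(1, 32)), np.zeros(32)
W2, b2 = rng.normal(size=(32, 1)), np.zeros(1)
We, be = rng.normal(size=(2, 64)), np.zeros(64)            # encodes (x, y) support pairs

def encode_task(support_x, support_y):
    """Embed a task by averaging an encoding of its support pairs, then split the
    embedding into per-feature scale (gamma) and shift (beta) modulation terms."""
    pairs = np.stack([support_x, support_y], axis=1)
    emb = np.tanh(pairs @ We + be).mean(axis=0)            # task embedding
    return 1.0 + emb[:32], emb[32:]                        # gamma, beta

def modulated_forward(x, gamma, beta):
    """Base network whose hidden activations are modulated by the task embedding."""
    h = np.tanh(x @ W1 + b1)
    h = gamma * h + beta                                   # FiLM-style modulation
    return h @ W2 + b2

# Stand-in few-shot regression task: a handful of (x, y) samples from one mode.
support_x = rng.uniform(-1, 1, size=5)
support_y = np.sin(3 * support_x)
gamma, beta = encode_task(support_x, support_y)
print(modulated_forward(np.array([[0.3]]), gamma, beta))
```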