jueves, 23 de febrero de 2012

Reporte 4

This time, a user named mpi was created on each of the nodes; every node must have the same user so that the nodes can share different kinds of files, including mpich2.

link
http://elisa.dyndns-web.com/progra/cluster.

I nominate Jose Guadalupe Gonzalez Hernandez for his contribution.

Processes and Concurrency - high degree of parallelism

A process is an abstraction of a running program, consisting of executable code, a data section containing global variables, a stack section containing temporary data (such as subroutine parameters, return addresses, and temporary variables), and the state of the processor registers. The program corresponds to a passive entity, whereas the process corresponds to an active entity.

In multiprogrammed systems, time-sharing on a computer with a single processor produces the phenomenon known as concurrent processing: the CPU switches between processes in fixed portions of time. This is known as pseudoparallelism because, in the eyes of the user, the processes appear to execute in parallel; the illusion arises because the time slice the system assigns to each process is very small and therefore hard for a human to perceive. When the computer system is equipped with multiple processors, it can perform true parallel processing. Parallelism is a form of computing in which many calculations are performed simultaneously, based on the principle of dividing a big problem into several small problems that are then solved in parallel. There are several types of parallelism: bit-level, instruction-level, data, and task parallelism.


Concurrency includes a large number of design issues, including inter-process communication, sharing of and competition for resources, synchronization of the execution of several processes, and allocation of processor time to processes. It is essential for designs such as multiprogramming, multithreading, and distributed processing.

Processes are concurrent if they exist simultaneously. When two or more processes are active at the same time, we say that concurrent processing is taking place. Note that for two or more processes to be concurrent, there must be some relation between them. Concurrency can arise in three different contexts:
  • Multiple applications: multiprogramming was created to allow the machine's processor time to be shared dynamically among several active jobs or applications.
  • Structured applications: as an extension of the principles of modular design and structured programming, some applications can be implemented effectively as a set of concurrent processes.
  • Structure of the operating system: the same advantages apply to system programs, and it has been shown that some operating systems are implemented as a set of processes.

There are three computer models that can run concurrent processes:
  • Multiprogramming with a single processor: the operating system is responsible for distributing processor time among the processes, interleaving their execution to give an appearance of simultaneous execution.
  • Multiprocessor: a machine consisting of a set of processors sharing main memory. In such architectures, concurrent processes can not only interleave their execution but also overlap it.
  • Multicomputer: a distributed-memory machine formed by a series of computers. In this type of architecture, the simultaneous execution of processes on different processors is also possible.

Advantages
  • It facilitates application development by allowing applications to be structured as a set of processes that cooperate to achieve a common goal.
  • It speeds up computation: if you want a task to run faster, you can break it into processes, each of which runs in parallel with the others.
  • It enables interactive use by multiple users working simultaneously.
  • It allows better use of resources, especially the CPU, since the input/output stages of one process can be overlapped with the computation stages of another.
Disadvantages
  • Starvation and termination problems
  • Occurrence of deadlocks
  • Two or more processes requiring the same resource

Types of concurrent processes
Processes running concurrently in a system can be classified as:
  • Independent processes: those that run without requiring the assistance or cooperation of other processes. A clear example of independent processes are the different shells running simultaneously on a system.
  • Cooperating processes: those designed to work together on an activity, for which they must be able to communicate and interact with each other.

The three basic states are:
1. Running: the process is using the CPU; its instructions are being executed.
2. Ready: the process is able to run but is waiting to be assigned CPU time.
3. Blocked: the process is waiting for an external event to occur (such as the completion of an I/O operation).

A process can be in any of the aforementioned states, and the possible transitions between them are the following:
1. Running - Blocked: a process moves from running to blocked when it must wait for an event, for example, the completion of an I/O operation or the activation of a semaphore.
2. Running - Ready: a process moves from running to ready when it exhausts the time slice allocated by the system's process scheduler; at that moment the system must assign the processor to another process.
3. Ready - Running: a process moves from ready to running when the system gives it CPU time.
4. Blocked - Ready: a process moves from blocked to ready when the external event it was waiting for occurs.

jueves, 16 de febrero de 2012

Reporte 3

This week I tried to implement merge sort, but I had some problems; here are the results.




As you can see, the program does not sort the numbers very well.

I will try to fix the program next week.

jueves, 9 de febrero de 2012

Entrega 2


My contribution was only to identify the applications of a cluster.
The link is:




My next contribution will be the merge sort in Java.

I'd like to give extra points to Eduardo Triana for his contribution

MPI y POSIX (lab)

What is MPI?
Message Passing Interface (MPI) is a standardized and portable message-passing system designed by a group of researchers from academia and industry to function on a wide variety of parallel computers. The standard defines the syntax and semantics of a core of library routines useful to a wide range of users writing portable message-passing programs in Fortran 77 or the C programming language. Several well-tested and efficient implementations of MPI exist, including some that are free and in the public domain. These have fostered the development of a parallel software industry and encouraged the development of portable and scalable large-scale parallel applications.
Functionality
The MPI interface is meant to provide essential virtual topology, synchronization, and communication functionality between a set of processes (that have been mapped to nodes/servers/computer instances) in a language-independent way, with language-specific syntax (bindings), plus a few language-specific features. MPI programs always work with processes, but programmers commonly refer to the processes as processors. Typically, for maximum performance, each CPU (or core in a multi-core machine) will be assigned just a single process. This assignment happens at runtime through the agent that starts the MPI program, normally called mpirun or mpiexec.
What is POSIX?
Short for "Portable Operating System Interface for uni-X", POSIX is a set of standards codified by the IEEE and issued by ANSI and ISO. The goal of POSIX is to ease the task of cross-platform software development by establishing a set of guidelines for operating system vendors to follow. Ideally, a developer should have to write a program only once to run on all POSIX-compliant systems. Most modern commercial Unix implementations and many free ones are POSIX compliant. There are actually several different POSIX releases, but the most important are POSIX.1 and POSIX.2, which define system calls and command-line interface, respectively.
The POSIX specifications describe an operating system that is similar to, but not necessarily the same as, Unix. Though POSIX is heavily based on the BSD and System V releases, non-Unix systems such as Microsoft's Windows NT and IBM's OpenEdition MVS are POSIX compliant.

miércoles, 1 de febrero de 2012

Aporte semana 1

My contribution was mainly to explain what a cluster is, since many of my classmates and friends did not know what one was until it was explained to them.

-A cluster is a group of independent computers that run a series of applications jointly and appear to clients and applications as a single system.

Also mentioned were:
-Types
-Components
-Implemented clusters
-Advantages and disadvantages


link: http://elisa.dyndns-web.com/progra/Cluster