Thursday, May 24, 2012

Final Week

What happened with the project.

What this project lacked was more teamwork: setting aside the differences we have with one another and being more communicative within the teams that were formed. In my opinion, if we had all talked from the beginning, or if every team had attended the meetings, it would have been a different story.

We also put together a cluster with LAM and a graphical interface, in a rather improvised way; here is the link on the wiki:


Pedro Miguel
Juan Carlos

Sunday, May 20, 2012

Petri Net

In 1962, Carl Petri established a mathematical tool for studying communication with automata. Some of the most important applications of Petri nets have been in the modeling and analysis of communication protocols and of manufacturing systems. In that area they have been used to represent production lines, automated assembly lines, automotive production systems, flexible manufacturing systems, just-in-time systems, and so on.

Petri Nets are bipartite graphs consisting of three types of objects:
  • Places.
  • Transitions.
  • Directed arcs.

Petri nets are considered a tool for the study of systems. With their help we can model the behavior and structure of a system and push the model to boundary conditions that in a real system would be difficult or very expensive to reach.
The theory of Petri nets has become recognized as an established methodology in the robotics literature for modeling flexible manufacturing systems.
Compared with other models of dynamic behavior, such as flowcharts and finite state machines, Petri nets offer a way to express processes that require synchronization. Perhaps most importantly, Petri nets can be analyzed formally to learn the dynamic behavior of the modeled system.
Modeling a system by mathematical representations means making an abstraction of the system; this is achieved with Petri nets, which can also be studied as automata in order to investigate their mathematical properties.

Areas of application

  • Data Analysis
  • Software design
  • Reliability
  • Workflow
  • Concurrent programming


Consider a single queue serving 100 clients. Customer arrival times are successive values of the random variable t, service times are given by the random variable ts, and N is the number of servers. In the model's initial state the queue is empty and all servers are idle. The Petri net for this scenario is shown in the figure.

Places are labeled with capital letters and transitions with lowercase letters. The place labels will also be used as variables whose values are the token counts.

The arcs carry labels that can represent the transition functions, which specify the number of tokens removed or added when a transition fires.
Place A initially holds the 100 arriving customers; place B prevents clients from entering more than once; place Q is the queue where customers wait to be served; place S holds the servers idle and waiting for work; and place E counts the customers that leave the system. The initial state gives the places the following values:

  • A = 100
  • B = 1
  • Q = 0
  • S = N
  • E = 0 

Transition a models customers entering the system, and transition b models customers being served.
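The token game described above can be sketched directly in Python. This is a minimal illustration, not code from the post; the place names A, B, Q, S, E and transitions a and b follow the description, and N is fixed to 3 for the demo:

```python
import random

# Markings: A = customers yet to arrive, B = arrival guard,
# Q = waiting queue, S = idle servers, E = customers served.
N = 3  # number of servers (small value chosen for the demo)
marking = {"A": 100, "B": 1, "Q": 0, "S": N, "E": 0}

def enabled(m):
    """Return the transitions enabled under marking m."""
    t = []
    if m["A"] > 0 and m["B"] > 0:   # transition a: a customer arrives
        t.append("a")
    if m["Q"] > 0 and m["S"] > 0:   # transition b: a customer is served
        t.append("b")
    return t

def fire(m, t):
    """Fire transition t, moving tokens between places."""
    if t == "a":
        m["A"] -= 1; m["B"] -= 1
        m["Q"] += 1; m["B"] += 1   # the guard token returns to B
    elif t == "b":
        m["Q"] -= 1; m["S"] -= 1
        m["E"] += 1; m["S"] += 1   # the server returns to S when done

random.seed(0)
while True:
    ts = enabled(marking)
    if not ts:
        break
    fire(marking, random.choice(ts))

print(marking)  # all 100 customers end up counted in E
```

Whatever order the enabled transitions fire in, the net always terminates with A = 0, Q = 0, S = N and E = 100, which is exactly the kind of property formal analysis of the net can prove.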
Classical Petri nets model states, events, conditions, synchronization, and parallelism, among other system features. However, Petri nets that describe the real world tend to be complex and extremely large. Furthermore, conventional Petri nets do not allow the modeling of data and time. To solve this problem, many extensions have been proposed. They can be grouped into three categories: time, color, and hierarchy extensions.

An important point in real systems is the description of the system's temporal behavior. Because classical Petri nets cannot handle time quantitatively, the concept of time is added to the model. Petri nets with time can in turn be divided into two classes: deterministic time Petri nets (or regular time Petri nets) and stochastic time Petri nets (or stochastic Petri nets).
The family of Petri nets with deterministic time comprises Petri nets that associate a fixed delay time with their transitions, places, or arcs.

The family of Petri nets with stochastic time comprises Petri nets that associate a stochastic delay time with their transitions, places, or arcs.

Because this type of net associates a delay with the execution of an enabled transition, a reachability tree cannot be established, since the evolution is not deterministic. The analysis methods associated with such nets are Markov chains, in which the probability of reaching a new state depends only on the previous state.

The size problem that arises when modeling real systems with Petri nets can be treated with hierarchical Petri nets. These nets provide, as the name implies, a hierarchy of subnets. A subnet is a set of places, transitions, arcs, and even other subnets, so the construction of a large system is based on a mechanism for structuring two or more processes, each represented by a subnet. At one level this gives a simple description of the processes, and at another level a more detailed description of their behavior.
Hierarchy extensions first appeared as extensions to colored Petri nets; however, types of hierarchical Petri nets that are not colored have also been developed.

Colored Petri nets are an extension of Petri nets that incorporates a modeling language. They are considered a modeling language developed for systems in which communication, synchronization, and resource sharing are important. Thus, colored Petri nets combine the advantages of Petri nets with those of classical high-level programming languages. To make this statement clear, we would list the features of the graphical elements of such nets.



Scalability is the ability of a system, network, or process to handle a growing amount of work in a capable manner, or its ability to be enlarged to accommodate that growth. For example, it can refer to the capability of a system to increase total throughput under an increased load when resources (typically hardware) are added. An analogous meaning is implied when the word is used in a commercial context, where the scalability of a company implies that the underlying business model offers the potential for economic growth within the company.


Scalability, as a property of systems, is generally difficult to define, and in any particular case it is necessary to specify the requirements for scalability on those dimensions that are deemed important. It is a highly significant issue in electronic systems, databases, routers, and networking. A system whose performance improves after adding hardware, proportionally to the capacity added, is said to be a scalable system. An algorithm, design, networking protocol, program, or other system is said to scale if it is suitably efficient and practical when applied to large situations (e.g. a large input data set, or a large number of participating nodes in the case of a distributed system). If the design fails when the quantity increases, it does not scale.

In information technology, scalability (sometimes spelled scaleability) has two usages:
1) It is the ability of a computer application or product (hardware or software) to continue to function well when it (or its context) is changed in size or volume to meet a user's need. Typically, the rescaling is to a larger size or volume. The rescaling can be of the product itself (for example, a line of computer systems of different sizes in terms of storage, RAM, and so forth) or of the scalable object's movement to a new context (for example, a new operating system).

2) It is the ability not only to function well in the rescaled situation, but to actually take full advantage of it. For example, an application program would be scalable if it could be moved from a smaller to a larger operating system and take full advantage of the latter in terms of performance (user response time and so forth) and the larger number of users that could be handled.
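How far performance can improve as resources are added is commonly quantified with Amdahl's law, which the text does not name but which captures why "proportional to the capacity added" has a limit whenever part of the work is serial. A small sketch:

```python
def amdahl_speedup(parallel_fraction, n_workers):
    """Upper bound on speedup when only a fraction of the work parallelizes."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_workers)

# If 90% of a program parallelizes, 8 workers give at most ~4.7x speedup,
# and even unboundedly many workers cannot beat 10x.
print(round(amdahl_speedup(0.9, 8), 2))      # 4.71
print(round(amdahl_speedup(0.9, 10**9), 2))  # 10.0
```

The 10% serial remainder is what makes a design stop scaling as the quantity of hardware increases.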


Thursday, May 17, 2012

Week 15

This week's contribution was more work on PVM with my classmates Alejandro, Esteban, and Jonathan.
Here is the link:

My nominations go to:

Esteban Sifuentes
Pedro Miguel

Saturday, May 12, 2012

Grid Computing

Grid computing is an innovative technology that enables coordinated use of all kinds of resources (including computing, storage, and specific applications) that are not subject to centralized control. In this sense it is a new form of distributed computing, in which resources can be heterogeneous (different architectures, supercomputers, clusters, ...) and are connected by wide area networks (for example, the Internet). Developed in scientific fields in the early 1990s, grids entered the commercial market following the idea of so-called utility computing, a major revolution. The term grid refers to an infrastructure that allows the integration and collective use of high-performance computers, networks, and databases owned and managed by different institutions.

A grid computing system is one that shares geographically distributed, non-centralized resources to solve large-scale problems. The shared resources can be computers (PCs, workstations, supercomputers, PDAs, laptops, phones, etc.), software, data and information, special instruments (radio telescopes, etc.), or people and their skills.
Grid computing offers many advantages over alternative technologies. The power offered by a multitude of networked computers in a grid is virtually unlimited, and the grid provides seamless integration of heterogeneous systems and devices, so that the connections between different machines do not cause problems. The result is highly scalable, powerful, and flexible; it sidesteps resource shortages (bottlenecks) and never becomes obsolete, because the number and characteristics of its components can be modified.
These resources are distributed across the network transparently while respecting security guidelines and management policies, both technical and economic. The goal, then, is to share a set of online resources in a uniform, secure, transparent, efficient, and reliable manner, providing a single point of access to a set of geographically distributed resources in different management domains. This can lead us to see grid computing as enabling the creation of virtual enterprises. It is important to know that a grid is a distributed set of machines that helps carry out heavy software workloads.

Some features of grid computing are:
- Uses well-defined network architectures and protocols.
- Virtual teams can be grouped around the world.
- Requires access controls and security.
- Various institutions can pool their resources to obtain results.
- The grid locates the least-used machines and assigns them tasks.
- Not all problems can be solved with a grid.

Moreover, this technology gives companies the benefit of speed, a competitive advantage that improves the time to produce new products and services.

It makes it easy to share, access, and manage information through collaboration and operational flexibility, combining not only diverse technological resources but also people with different skills. It also tends to increase productivity by giving end users access to the computing, storage, and data resources they need, when they need them.

With respect to security in the grid, it is supported by "intergrids," which offer the same security as the LAN on which the grid technology is used.

Parallelism might be seen as a problem, since a parallel machine is very expensive. But if we have a set of small or medium-sized heterogeneous machines available whose combined computing power is sufficiently large, we can build distributed systems of very low cost and high computing power.

To maintain its structure, grid computing needs various services: Internet connections 24 hours a day, 365 days a year, bandwidth, server capacity, security, VPNs, firewalls, encryption, secure communications, security policies, ISO standards, and more. Without all these functions and features we cannot speak of grid computing.

Fault tolerance means that if one of the machines that make up the grid fails, the system recognizes this and forwards the task to another machine, thereby creating flexible and robust operational infrastructures.

Applications of Grid Computing

Currently, there are five general applications for Grid Computing:

Supercomputing.

These are applications whose needs cannot be met by a single node. The needs occur at particular instants of time and are resource-intensive.

Real-time distributed systems.

These are applications that generate a flow of high-speed data to be analyzed and processed in real time.

Specific services.

Here the computing power and storage capacity are not what matters, but rather the resources that an organization may consider surplus; the grid pools those organizational resources.

Data-intensive processing.

These are applications that make heavy use of storage space. Such applications exceed the storage capacity of a single node, so the data are distributed throughout the grid. Besides the benefit of increased space, distributing the data along the grid allows it to be accessed in a distributed manner.

Collaborative virtual environments.

An area associated with the concept of tele-immersion, using the vast distributed resources of grid computing to produce distributed 3D virtual environments.


Thursday, May 10, 2012

Week 14

This week's contribution was working with my classmates Alejandro and Jonathan on PVM.


I nominate

Pedro: http://pedrito-martinez.blogspot.mx/2012/05/benchmarks.html

Osvaldo: http://4imedio.blogspot.mx/2012/05/reporte-semana-14-paralelos.html

for these contributions.


In computing, a benchmark is an application designed to measure the performance of a computer or of one of its components. To that end, the machine is subjected to a series of workloads or stimuli of different types with the intention of measuring its response to them. This makes it possible to estimate under which tasks or stimuli a given computer behaves reliably and effectively, and under which it proves inefficient. This information is very useful when selecting a machine for specific tasks, for example in the post-production and creation of audiovisual products, choosing the most appropriate machine for a given process.

The benchmark is also useful for estimating how obsolete a system is, or what technical performance could be improved through upgrades.
Furthermore, a benchmark can give us a computer's technical specifications together with its performance under different stimuli, allowing comparisons between different systems based on both. Such comparisons are useful for determining which technical characteristics are ideal for optimal performance on a specific task. A comparison between computers from different manufacturers (with different specifications) lets us determine a priori which are more suitable for some applications and which are better for others.
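The idea of subjecting a machine to a fixed workload and measuring its response can be sketched in a few lines. This toy micro-benchmark (not any standard suite such as Linpack) estimates floating-point throughput:

```python
import time

def mflops(n=1_000_000):
    """Time n multiply-add iterations and report millions of FLOPs per second."""
    x = 1.0000001
    acc = 0.0
    start = time.perf_counter()
    for _ in range(n):
        acc = acc * x + 1.0      # one multiply + one add per iteration
    elapsed = time.perf_counter() - start
    return (2 * n) / elapsed / 1e6

print(f"~{mflops():.1f} MFLOP/s")
```

Results vary from run to run; real benchmarks repeat the measurement many times and control for interpreter, compiler, and hardware noise before comparing systems.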

In the early 1990s, a definition of "supercomputer" was needed in order to produce comparable statistics. After experimenting with various metrics based on the number of processors, in 1992 the idea arose at the University of Mannheim of using a list of production systems as the basis for comparison.

A year later, Jack Dongarra joined the project with the Linpack benchmark. In May 1993 the first list was assembled, based on data published on the Internet:
  • The statistics series on supercomputers from the University of Mannheim (1986-1992)
  • The list of the world's most powerful computing sites, maintained by Gunter Ahrendt
  • A wealth of information gathered by David Kahaner
The power of the systems is measured with the HPL benchmark, a portable version of the Linpack benchmark for distributed-memory computers.
Note that the list does not include grid computing systems, nor the MDGRAPE-3 supercomputer, which reaches a petaflop and is more powerful than any system on the list but cannot run the benchmarking software used, since it is not a general-purpose supercomputer.
All the lists published since the beginning of the project are posted on the project's website, so it makes no sense to copy that information elsewhere.

Machines that have held the number 1 spot

Fujitsu K computer (Japan, June 2011 - present)
NUDT Tianhe-1A (China, November 2010 - June 2011)
Cray Jaguar (USA, November 2009 - November 2010)
IBM Roadrunner (USA, June 2008 - November 2009)
IBM Blue Gene/L (USA, November 2004 - June 2008)
NEC Earth Simulator (Japan, June 2002 - November 2004)

Thursday, May 3, 2012


A little introduction to supercomputers, with some history

A supercomputer is the most powerful and fastest type of computer available at a given time. Such machines are designed to process huge amounts of information in a short time and are dedicated to specific tasks; their use goes beyond that of an individual, and they are instead devoted to things like:
1. Searching large seismic databases for oil fields.
2. The study and prediction of tornadoes.
3. The study and prediction of weather anywhere in the world.
4. The development of models and projects for aircraft design, and flight simulators.

We must also add that supercomputers are a relatively new technology, so their use is not yet widespread and is sensitive to change. For this reason the price is very high, often exceeding 30 million dollars, and the number produced per year is small; some are even made only to order. Supercomputers are the most powerful and fastest computers in existence at a given time. They are physically large, the largest among their peers. They can process huge amounts of information in a short time, performing millions of instructions per second; they are aimed at specific tasks and have very large storage capacity.

They have special temperature control in order to dissipate the heat that some components can reach. The central machine acts as the arbiter of all requests and controls access to all files, as well as the input and output operations. Users turn to the organization's central computer when they require processing support.
They are designed as multiprocessing systems; the CPU is the processing center, and they can support thousands of online users. The number of processors a supercomputer can have depends mainly on the model; it can range from about 16 to 512 processors (as in the NEC SX-4 of 1997) or more.

The Manchester Mark I
The first British supercomputer laid the foundations for many concepts still used today.
The world's first digital, electronic, stored-program computer successfully executed its first program on June 21, 1948. That program was written by Tom Kilburn, who, along with the late F.C. (Freddie) Williams, designed and built the Manchester Mark I. This machine, the prototype Mark I, quickly became known as 'The Baby Machine' or just 'The Baby'.
In modern terms, The Baby had a RAM (random access memory) of only 32 locations or 'words', each word consisting of 32 bits (binary digits), which means the machine had a total of 1024 bits of memory.
The RAM technology was based on the cathode ray tube (CRT). CRTs were used to store data bits as charged areas on the phosphor screen, which showed as a series of glowing dots. The CRT's electron beam could manipulate this charge efficiently, writing a 1 or a 0 and then reading it back on demand. Freddie Williams led the research that perfected CRT storage, with Tom Kilburn making decisive contributions.

The Cray 1
The Cray-1 was the first "modern" supercomputer.
The first Cray-1 in England was located at the Daresbury Laboratory for two years before being moved to the University of London.
The first major success in the design of a modern supercomputer was the Cray-1, introduced in 1976. One reason the Cray-1 was so successful was that it could perform more than one hundred million arithmetic operations per second (100 MFLOP/s). It was designed by Seymour Cray, who left Control Data Corporation in 1970 to create his own company, Cray Research Inc., founded in 1972. If today, by conventional means, you tried to match that speed using PCs, you would need to connect 200 of them, or you could just buy 33 Sun4s. Cray Research Inc. made at least 16 of its fabulous Cray-1s. A typical Cray-1 cost over 700,000 dollars in 1976. A nice touch was that you could order the machine in any color you wished, which is still the case.

IBM - SP2 computer
The IBM SP2 installed at the Daresbury Laboratory has, since an upgrade this year, 24 P2SC (Super Chip) nodes, plus two older wide nodes, housed in two racks, only the second of which is shown in the picture. Each node has a 120 MHz clock and 128 Mbytes of memory. Two new High Performance Switches (TB3) are used to connect the nodes together. Data storage is 40 GB of fast locally attached disk, with Ethernet and FDDI networks for user access.
A maximum performance of 480 Mflops per node gives a total of over 12 Gflops for the entire machine.
A PowerPC RS/6000 workstation is connected to the SP2 system for monitoring and management of hardware and software.



Week 13

This week's contribution was an investigation of a data mining algorithm, at the request of my classmate Avendaño. It is worth noting that it can be applied in clustering to build neural networks, which is where I would like to give this algorithm more importance and where my classmate Avendaño was most interested.


My nominations go to:
Jonathan Alvarado, Alejandro Avendaño, and Osvaldo Hinojosa.

Monday, April 30, 2012

Week 12

This week's contribution was a short piece on Hamachi: what it is, how it works, and a bit about its security. It is worth noting that it would be a very good tool for the cluster.


My nominations go to:
Alejandro Avendaño, Eduardo Triana, and Adriana Contreras for the contribution:

Communication in distributed systems

The most important difference between a distributed system and a single-processor system is the communication between processes. On a single processor, communication implicitly assumes the existence of shared memory:
• E.g., the producer-consumer problem, where one process writes to a shared buffer and another process reads from it.
In a distributed system there is no shared memory, so the whole nature of interprocess communication must be rethought. For processes to communicate, they must adhere to rules known as protocols. For wide-area distributed systems, these protocols often take the form of several layers, each with its own goals and rules. Messages are exchanged in various ways, and there are many design options in this regard; an important one is the remote procedure call. It is also important to consider communication between groups of processes, not only between two processes. Given the absence of a physical connection between the different memories of the machines in a distributed system, communication is performed by message transfer.
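The producer-consumer example mentioned above can be sketched with a shared buffer on a single machine; in a distributed system this shared queue would have to be replaced by message transfer, which is exactly the point of the paragraph. A minimal shared-memory sketch in Python:

```python
import queue
import threading

buffer = queue.Queue(maxsize=4)   # the shared buffer both processes use
results = []

def producer():
    for i in range(10):
        buffer.put(i)             # blocks when the buffer is full
    buffer.put(None)              # sentinel: no more items

def consumer():
    while True:
        item = buffer.get()       # blocks when the buffer is empty
        if item is None:
            break
        results.append(item * 2)  # "process" the item

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
print(results)  # [0, 2, 4, ..., 18]
```

The bounded queue gives the synchronization for free; with no shared memory, the same coordination has to be rebuilt out of messages and protocols.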

The standard ISO OSI
Messages are sent using the ISO OSI (Open Systems Interconnection) standard, a layered model for the communication of open systems. The layers provide multiple interfaces with different levels of detail, the last being the most general. The OSI standard defines the following seven layers: physical, data link, network, transport, session, presentation, and application. The OSI model distinguishes two types of protocols: connection-oriented and connectionless. In the first, a virtual connection must be established before any data transmission and released after sending. Connectionless protocols do not require this step, and messages are sent as datagrams.

Asynchronous transfer mode (ATM)
Asynchronous transfer mode, or ATM, provides a fast means of transmission. Higher speeds are achieved by dispensing with flow control and error control at the intermediate nodes of the transmission. ATM uses connection-oriented mode and allows transmission of different types of information, such as voice, video, and data.
The client-server model bases its communication on a simplification of the OSI model. Using all seven layers wastes network transfer speed, so only three layers are used: physical, data link, and request/reply. Transfers are based on the request/reply protocol, eliminating the need for connection setup.
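The request/reply exchange at the heart of the client-server model can be illustrated with a connected socket pair standing in for the network link; this is a sketch of the pattern only, not of the three-layer protocol stack itself:

```python
import socket
import threading

# A connected pair of sockets plays the role of the network link.
server_sock, client_sock = socket.socketpair()

def server():
    # The server blocks waiting for a request, then sends one reply.
    request = server_sock.recv(1024).decode()
    server_sock.sendall(f"echo: {request}".encode())

t = threading.Thread(target=server)
t.start()

client_sock.sendall(b"ping")              # request
reply = client_sock.recv(1024).decode()   # reply
t.join()
server_sock.close(); client_sock.close()
print(reply)  # echo: ping
```

The client simply sends a request and blocks until the reply arrives; no separate connection setup and teardown phase is needed for each exchange.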
Group communication
Communication systems that support groups can be divided into two categories: closed groups and open groups. In the former, only communication between group members is allowed, while in the latter, processes that do not belong to the group can interact with it. Closed groups are generally used for parallel processing, where each member has its own objective and does not interact with the "outside world". Open groups, instead, are intended for client/server application development, where processes interact with the group to request services.

We can also categorize groups of processes by the role played by their members: peer groups and hierarchical groups. If there is no distinction between the processes in a group and decisions are taken collectively, it is said to be a peer group. If, however, hierarchical relations are established in a group, where some processes have a greater say than others, we speak of hierarchical groups. Each group organization has its advantages and disadvantages. In hierarchical groups, if the decision-making processes fail, the other members can be left out of step for as long as the situation lasts. In a peer group this effect does not occur: if one member fails, the rest can continue working (as a smaller group).


Thursday, April 19, 2012

Week 11

This week's contribution was some information on what can be used in pyMPI, complementing what my classmate Alejandro Avendaño did.


My nominations go to:
Osvaldo Hinojosa
Alejandro Avendaño
Jonathan Alvarado

for this week's contributions.

Monday, April 16, 2012

Week 10

This week's contribution is some C programs implementing merge sort,

and here is the merge sort with threads; I'm not sure whether it is done properly.
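The linked code is in C; as a reference point, here is a sketch in Python of one reasonable way to structure a threaded merge sort, splitting the recursive halves across threads down to a fixed depth (an illustration, not the linked code):

```python
import threading

def merge(left, right):
    """Merge two sorted lists into one sorted list."""
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

def merge_sort(data, depth=2):
    """Sort data; spawn a thread for one half while depth > 0."""
    if len(data) <= 1:
        return data
    mid = len(data) // 2
    if depth > 0:
        result = {}
        t = threading.Thread(
            target=lambda: result.update(
                left=merge_sort(data[:mid], depth - 1)))
        t.start()                               # one half in a new thread
        right = merge_sort(data[mid:], depth - 1)  # other half in this thread
        t.join()
        return merge(result["left"], right)
    return merge(merge_sort(data[:mid], 0), merge_sort(data[mid:], 0))

print(merge_sort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```

Capping the depth keeps the thread count bounded (at most 2^depth threads) instead of spawning one thread per recursive call, which is the usual pitfall in threaded merge sorts.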

My nominations go to:
Jose Gonzales, Alejandro Avendaño, and Eduardo Triana

Thursday, April 5, 2012

Week 9

For this week's contribution I covered the possible failures a distributed system can present, since this week's topic is fault tolerance; in the lab I described this topic in general terms.

The link is the following:

My nominations go to:
Jose Gonzales 
Jonathan Alvarado

Fault tolerance

Fault tolerance (also called failover, and often confused with the term concurrency) is the capacity of a storage system to access information even when a failure occurs. The failure may be due to physical damage (malfunction) in one or more hardware components, causing the loss of stored information. Implementing fault tolerance requires the storage system to keep the same information in more than one hardware component, or in a backup machine or external device. That way, if a failure occurs with a consequent loss of data, the system must be able to access all the information, recovering the missing data from an available backup.

Fault-tolerant storage systems are vital in environments that work with critical information, such as banks, government entities, some companies, etc. The level of fault tolerance depends on the storage technique used and on how many times the information is duplicated; however, tolerance is never 100%, because if all the available "mirrors" fail, including the origin, the data is left incomplete and the information will read as corrupt.

Fault tolerance is a critical aspect for large-scale applications, since simulations that can take on the order of several days or weeks to deliver results must be able to handle certain kinds of failures of the system or of some task of the application.

Without the ability to detect failures and recover from them, such simulations may never complete. Moreover, some kinds of applications must run in a fault-tolerant environment because of the level of security required.
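The detect-and-recover behavior described above can be sketched as reading from mirrors in order until one succeeds. The replica layout and function name here are hypothetical, a minimal illustration only:

```python
def read_with_failover(replicas, key):
    """Try each replica in order; return the first successful read."""
    errors = []
    for name, store in replicas:
        try:
            return store[key]           # raises KeyError on a "failed" copy
        except KeyError as exc:
            errors.append((name, exc))  # record the failure, try the next mirror
    # All mirrors failed, including the origin: the data is lost.
    raise LookupError(f"all replicas failed for {key!r}: {errors}")

primary = {}                       # the primary "lost" this record
mirror = {"balance": 100}          # the backup still has it
print(read_with_failover([("primary", primary), ("mirror", mirror)],
                         "balance"))  # 100
```

As the text notes, tolerance is never 100%: when every mirror and the origin fail, the final LookupError is the corrupt-read case with no recovery possible.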




Thursday, March 29, 2012

Week 8

This week's contribution complemented the grid computing topic done by my classmate Eduardo Triana.


I would like points to go to:

Jose Guadalupe Gonzalez Hernandez for his contribution.

Replication and Consistency

The data in our system consists of a collection of elements called objects. An object could be a file or a Java object. Data replication is used to automatically maintain copies of data on multiple computers, for example caching data from Web servers.

It is a technique for improving the services that distributed systems offer, because it improves service performance, increases availability, and makes the system fault tolerant.

Its objective is for client operations on the replicas to be carried out consistently, with satisfactory response time and throughput. Consistency is sometimes not achieved because the replicas of a given object are not necessarily identical, at least not at every particular instant: some replicas may have received updates that others have not.

Why replicate?
- Continuity of work in the face of failures
- More copies, better protection against data corruption
- Scalability in numbers
- Scalability in geographic area (shorter access time to nearby copies)
- Simultaneous querying of data

Data consistency is defined by a contract between the programmer and the system: the system guarantees that if the programmer follows the rules, memory will be consistent and the results of memory operations will be predictable.

High-level languages such as C, C++, and Java partially respect this model by translating memory operations into low-level operations in a way that preserves the memory semantics. To maintain the model, compilers may reorder some memory instructions, and library calls such as pthread_mutex_lock() encapsulate the necessary synchronization.
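A Python analogue of the pthread_mutex_lock() pattern: if every thread follows the rule of taking the lock before touching shared data, the result of the memory operations is predictable. A minimal sketch, not from the original post:

```python
import threading

counter = 0
lock = threading.Lock()   # plays the role of pthread_mutex_lock/unlock

def add_many(n):
    global counter
    for _ in range(n):
        with lock:        # the read-modify-write below is now atomic
            counter += 1

threads = [threading.Thread(target=add_many, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000: predictable because every thread followed the rules
```

Removing the lock breaks the contract: the increments can interleave and the final count becomes unpredictable, which is precisely the inconsistency the models below are about.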

In general it is not acceptable for different clients to obtain different results when accessing the same data, at least not if the result leads to a detectable and significant inconsistency between different applications, or even within a single application.
Consistency models include:
  • Linearizability (also known as strict or atomic consistency)
  • Sequential consistency:
       - The result of an execution is the same as if all operations (reads and writes) of all processes on the data were executed in some sequential order, and the operations of each individual process appear in that sequence in the order specified by its program.
  • Causal consistency
  • Release consistency
  • Eventual consistency
  • Delta consistency
  • PRAM consistency (also known as FIFO consistency)
  • Weak consistency
    - Weak consistency ensures consistency over a group of operations, not over isolated reads or writes.
  • Vector-field consistency


Monday, March 12, 2012

Week 5

Parallel simulation can run on different architectures, whether on networks of computers or on so-called clusters of processors: groups of interconnected microprocessors that can be used as a parallel computer. One way or another, parallel simulation is a technique that distributes a large computational load among many processors or computers to ease the solution of a problem.

Applications developed for parallel processing are increasingly efficient, flexible and scalable, while remaining open to new technologies and computational developments. Deployed on clusters, they allow orderly and robust coding, which gives high efficiency both in adapting the code to new requirements and in executing it. This methodology thus puts a range of flexible and scalable tools at the disposal of whoever needs them, helping to solve continuum-media problems efficiently, adaptably and systematically.

Parallel simulation of worldwide air traffic

Congestion and delay are common attributes of modern transportation systems. In its long-range forecasts, the Federal Aviation Administration (FAA) has projected 50% growth in commercial aircraft operations between now and the year 2020.

MITRE has recently developed a fast-time simulation tool capable of computing congestion-related delays. The tool, called the Detailed Policy Assessment Tool (DPAT), is based on a parallel discrete-event simulation engine that uses optimistic computing techniques to achieve ultra-fast run times. DPAT can compute the delays of about 400,000 air traffic control operations (takeoff, landing, or sector hand-off) in under a minute on a four-processor, 300 MHz Sun SPARC workstation. These fast run times enable different uses for DPAT: as a quick-assessment capability for rapid-response studies, as an engine for investigating hundreds or thousands of parameter variations in larger sensitivity studies, or as a decision-support tool for real-time aviation decisions.

Parallel simulation researchers have devised clever techniques to handle these cases, based on rollback and message cancellation. "Rollback" means resetting the clock of the second simulation entity backwards so that it can correctly process the straggler message from the slower entity first. "Message cancellation" is the technique used to annul any events the second entity published before the arrival of the straggler message.


Thursday, March 1, 2012

Report 5

This time my contribution was a few introductions to the topics of:
to make clear what they are XD


Thursday, February 23, 2012

Report 4

This time a user called mpi was created on each of the nodes; every node needs the same user in order to share different kinds of files, including mpich2.


I nominate Jose Guadalupe Gonzalez Hernandez for his contribution.

Processes and Concurrency - high degree of parallelism

A process is an abstraction of a running program, consisting of executable code, a data section containing global variables, a stack section containing temporary data such as subroutine parameters, return addresses and temporary variables, and the state of the processor registers. The program is a passive entity, whereas the process is an active entity.

In multiprogrammed systems, time-sharing on a computer with a single processor produces the phenomenon known as concurrent processing: the CPU switches between processes in fixed slices of time. This is known as pseudoparallelism, in the sense that in the eyes of the users the processes appear to execute in parallel; the illusion arises because the fixed time slice the system assigns to each process is very small and therefore hard to perceive. When the computer system is equipped with multiple processors, it can perform true parallel processing. Parallelism is a form of computing in which many calculations are performed simultaneously, based on the principle of dividing big problems into several smaller problems that are then solved in parallel. There are several types of parallelism: bit level, instruction level, data and task parallelism.

Concurrency involves a large number of design issues, including inter-process communication, sharing of and competition for resources, synchronization of the execution of several processes, and allocation of processor time to processes. It is essential for designs such as multiprogramming, multithreading and distributed processing.

Processes are concurrent if they exist simultaneously. When two or more processes run at the same time, concurrency occurs. Note that for two or more processes to be concurrent, some relation must exist between them. Concurrency can arise in three different contexts:
  • Multiple applications: multiprogramming was created so that the machine's processor time could be shared dynamically among multiple active jobs or applications.
  • Structured applications: as an extension of the principles of modular design and structured programming, some applications can be implemented effectively as a set of concurrent processes.
  • Structure of the operating system: the same advantages apply to systems programmers, and it has been shown that some operating systems are implemented as a set of processes.

There are three machine models on which concurrent processes can run:
  • Multiprogramming with a single processor: the operating system distributes processor time among the processes, interleaving their execution to give an appearance of simultaneous execution.
  • Multiprocessor: a machine consisting of a set of processors that share main memory. In such architectures, concurrent processes can not only interleave their execution but also overlap it.
  • Multicomputer: a distributed-memory machine formed by a series of computers. In this type of architecture the simultaneous execution of processes on different processors is also possible.

Concurrency:
  • Aids application development, by allowing applications to be structured as a set of processes that cooperate to achieve a common goal.
  • Speeds up computation: if you want a task to run faster, you can break it into processes, each of which runs in parallel with the others.
  • Enables interactive use by multiple users working simultaneously.
  • Allows better use of resources, especially the CPU, since the input/output phases of one process can be overlapped with the computation phases of another.

It also raises problems:
  • Starvation and termination
  • Occurrence of deadlocks
  • Two or more processes requiring the same resource

Types of concurrent processes. Processes running concurrently in a system can be classified as:
  • Independent processes: those that run without requiring assistance or cooperation from other processes. A clear example of independent processes are the different shells that run simultaneously on a system.
  • Cooperating processes: those designed to work together on an activity, for which they must be able to communicate and interact with each other.

The three basic states are:
1. Running: the process is using the CPU; its instructions are being executed.
2. Ready: the process is able to run but is waiting to be assigned CPU time.
3. Blocked: the process is waiting for an external event to occur (such as the completion of an I/O operation).

A process can be in any of the aforementioned states, and the transitions between them are as follows:
1. Running - Blocked: a process moves from running to blocked when it must wait for an event, for example waiting for an I/O operation to actually complete, waiting on a semaphore, etc.
2. Running - Ready: a running process moves to ready when it exhausts the time allocated by the system's process scheduler; at that moment the system must assign the processor to another process.
3. Ready - Running: a process moves from ready to running when the system gives it CPU time.
4. Blocked - Ready: a process moves from blocked to ready when the external event it was waiting for occurs.

Thursday, February 16, 2012

Report 3

This week I tried to implement merge sort, but I ran into some problems; here are the results.

As you can see, the program does not sort the numbers very well.

I will try to fix the program next week.

Thursday, February 9, 2012

Submission 2

My contribution was only to identify the applications of a cluster.
The link is:

My next contribution will be the merge sort in Java.

I'd like to give extra points to Eduardo Triana for his contribution

MPI y POSIX (lab)

What is MPI?
Message Passing Interface (MPI) is a standardized and portable message-passing system designed by a group of researchers from academia and industry to function on a wide variety of parallel computers. The standard defines the syntax and semantics of a core of library routines useful to a wide range of users writing portable message-passing programs in Fortran 77 or the C programming language. Several well-tested and efficient implementations of MPI exist, including some that are free or in the public domain. These fostered the development of a parallel software industry and encouraged the development of portable and scalable large-scale parallel applications.
The MPI interface is meant to provide essential virtual topology, synchronization, and communication functionality between a set of processes (that have been mapped to nodes/servers/computer instances) in a language-independent way, with language-specific syntax (bindings), plus a few language-specific features. MPI programs always work with processes, but programmers commonly refer to the processes as processors. Typically, for maximum performance, each CPU (or core in a multi-core machine) will be assigned just a single process. This assignment happens at runtime through the agent that starts the MPI program, normally called mpirun or mpiexec.
What is POSIX?
Short for "Portable Operating System Interface for uni-X", POSIX is a set of standards codified by the IEEE and issued by ANSI and ISO. The goal of POSIX is to ease the task of cross-platform software development by establishing a set of guidelines for operating system vendors to follow. Ideally, a developer should have to write a program only once to run on all POSIX-compliant systems. Most modern commercial Unix implementations and many free ones are POSIX compliant. There are actually several different POSIX releases, but the most important are POSIX.1 and POSIX.2, which define system calls and command-line interface, respectively.
The POSIX specifications describe an operating system that is similar to, but not necessarily the same as, Unix. Though POSIX is heavily based on the BSD and System V releases, non-Unix systems such as Microsoft's Windows NT and IBM's OpenEdition MVS are POSIX compliant.

Wednesday, February 1, 2012

Week 1 contribution

My contribution was mainly to explain what a cluster is, since many of my classmates and friends did not know what one was until it was explained to them.

- A cluster is a group of independent computers that run a series of applications jointly and appear to clients and applications as a single system.

Also mentioned:
- Deployed clusters
- Advantages and disadvantages

link: http://elisa.dyndns-web.com/progra/Cluster