
Java: Pass By Value or Pass By Reference

Data is shared between functions by passing parameters. There are two ways of passing parameters:

  • Passing by value: this method copies the value of the actual parameter. The called function creates its own copy of the argument value and then uses it inside the code. Since the work is done on a copy, the original parameter never sees any change.
  • Passing by reference: this method passes the parameter as a reference (address) to the original variable. The called function does not create its own copy; rather, it refers to the original value. Hence, changes made in the called function are reflected in the original parameter as well.

Java follows these rules when storing variables:

  • Local variables, including primitives and object references, are created on the stack
  • Objects are created on the heap

Java always passes arguments by value, NOT by reference.

So how can anyone be confused by this and believe that Java is pass by reference, or think they have an example of Java acting as pass by reference? The key point is that Java never provides direct access to the values of objects themselves, in any circumstances. The only access to an object is through a reference to it. Because Java objects are always accessed through a reference, rather than directly, it is common to talk about fields, variables, and method arguments as being objects, when pedantically they are only references to objects. The confusion stems from this (strictly speaking, incorrect) change in nomenclature.

So, when calling a method:

  • For primitive arguments (int, long, etc.), the value passed is the actual value of the primitive (for example, 3)
  • For objects, the value passed is the value of the reference to the object

So if you have doSomething(foo) and public void doSomething(Foo foo) { .. }, the two Foos are copied references that point to the same object.

Naturally, passing by value a reference to an object looks very much like (and is indistinguishable in practice from) passing an object by reference.
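
To make this concrete, here is a minimal Java sketch (class and field names are illustrative): mutating the object through the copied reference is visible to the caller, while reassigning the parameter itself is not.

class Foo {
    int value;
}

public class PassByValueDemo {

    static void doSomething(Foo foo) {
        foo.value = 42;   // mutates the object both references point to: visible to the caller
        foo = new Foo();  // reassigns only the local copy of the reference
        foo.value = 99;   // invisible to the caller
    }

    public static void main(String[] args) {
        Foo original = new Foo();
        original.value = 1;
        doSomething(original);
        System.out.println(original.value); // prints 42, not 99
    }
}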

Return Distinct Values for an Array Field

The following example returns the distinct values for the field storeCode from all documents in the inventory collection:

db.getCollection('inventory').distinct('storeCode')

This will be the expected result:

[
     "1502", "1002", "747"
]

As in a relational database, distinct finds the distinct values for a specified field across a single collection or view and returns the results in an array.
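
For completeness, here is a minimal sketch of the same query from Java, assuming the MongoDB sync driver (the connection string and database name are illustrative):

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;

public class DistinctDemo {

    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> inventory =
                    client.getDatabase("test").getCollection("inventory");

            // distinct() returns each unique value of the field exactly once
            for (String code : inventory.distinct("storeCode", String.class)) {
                System.out.println(code);
            }
        }
    }
}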

Algorithm complexity and Big O notation

Every system transforms data into output, and that is why it is important to understand the efficiency of our algorithms and data structures for each solution.

Big O notation measures the efficiency of an algorithm according to its time and space complexity. As the input size grows, we should be aware of what is going to happen to the system in the worst-case scenario. Time complexity is denoted O(…), where the three dots represent some function. Usually, the variable n denotes the input size.

The most common runtime complexities are:

  • O(1) Constant runtime: the running time of a constant-time algorithm does not depend on the input size.
  • O(log n) A logarithmic algorithm often halves the input size at each step. The running time of such an algorithm is logarithmic because log2 n equals the number of times n must be divided by 2 to get 1 (see the sketch after this list).
  • O(n) A linear algorithm goes through the input a constant number of times.
  • O(n log n) This time complexity often indicates that the algorithm sorts the input, because the time complexity of efficient sorting algorithms is O(n log n). Another possibility is that the algorithm uses a data structure where each operation takes O(log n) time.
  • O(n^2) A quadratic algorithm often contains two nested loops.
  • O(n^3) A cubic algorithm often contains three nested loops.
  • O(2^n) This time complexity often indicates that the algorithm iterates through all subsets of the input elements. For example, the subsets of {1, 2, 3} are ∅, {1}, {2}, {3}, {1, 2}, {1, 3}, {2, 3}, and {1, 2, 3}.
  • O(n!) This time complexity often indicates that the algorithm iterates through all permutations of the input elements. For example, the permutations of {1, 2, 3} are (1,2,3), (1,3,2), (2,1,3), (2,3,1), (3,1,2), and (3,2,1).
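
As a quick illustration of two of the complexities above, here is a small Java sketch (method names are illustrative) contrasting an O(n) linear scan with an O(log n) binary search over a sorted array:

public class ComplexityDemo {

    // O(n): goes through the input once
    static boolean linearContains(int[] sorted, int target) {
        for (int value : sorted) {
            if (value == target) return true;
        }
        return false;
    }

    // O(log n): halves the search interval at each step
    static boolean binaryContains(int[] sorted, int target) {
        int lo = 0, hi = sorted.length - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;
            if (sorted[mid] == target) return true;
            if (sorted[mid] < target) lo = mid + 1;
            else hi = mid - 1;
        }
        return false;
    }

    public static void main(String[] args) {
        int[] data = {1, 3, 5, 7, 9, 11};
        System.out.println(linearContains(data, 7));  // true
        System.out.println(binaryContains(data, 8));  // false
    }
}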

And why should we care about Big O notation? For applications that manipulate large amounts of data, this analysis is really important, since inefficient algorithms will impact the performance of the system. Be aware of the effect of the data structures you use in an algorithm, since each one holds data in memory and performs operations in different ways.

NP-hard problems are an important class of problems for which no polynomial-time algorithm is known.

SARS-CoV-2 and Street Disinfection

TL;DR (if you are in a hurry, which is absolutely the wrong attitude when you need to inform yourself, skip straight to the final part marked MORAL).

I return to the subject. From the very beginning I took a stand against this polluting and dangerous practice, receiving some criticism for it, but I have too much respect for my community to make decisions that please people and win easy consensus even though they are not correct.

Governing also means making difficult decisions and having the courage to explain them, to make them understood by presenting supporting data, and to see them through. After all, how many decisions that were strategic for the nation have been sacrificed on the altar of electoral consensus?

This was not a whim of mine: I simply informed myself on the websites of the ISS and the WHO, talked with experts, read many scientific articles (and not Donna Moderna), contacted the Federation of American Scientists for an opinion, and, thanks to my parents and my own efforts, I have built a solid scientific foundation that allows me to hold opinions based on facts, data, and observations rather than on emotions or impressions.

Unfortunately, science is antidemocratic and unpopular, it cannot be understood by everyone, and it does not care about the opinion of the individual, but science saves your life. And science has said from the very beginning that spraying disinfectant on the streets is useless and harmful as a way to fight a pandemic.

ALWAYS follow the science. Anyone who believes they are offering alternative truths that science wants to censor is just a charlatan, not a new Galileo. As someone more important than me once said, to be a new Galileo it is not enough to say the opposite of what everyone else claims; you also have to be right, and above all you have to prove it using the scientific method.

MORAL: for the reasons set out above and in the links below, in Minerbe (VR) THE STREETS WILL NOT BE DISINFECTED, since the evidence supports the uselessness and danger of disinfecting streets and outdoor paving with hazardous chemical products such as sodium hypochlorite. Everything else is just electoral show.

[1] https://www.epicentro.iss.it/coronavirus/pdf/rapporto-covid-19-7-2020.pdf

[2] https://www.sciencemag.org/news/2020/03/does-disinfecting-surfaces-really-prevent-spread-coronavirus?fbclid=IwAR313l6shJ9oluX9gyT0T9sswuyjVXGKk15gSxmdGOemc6DA3lMPnKr3Tcc

Spring 4+ with Ehcache 3.x

This post describes an example of using Ehcache with a Spring MVC application deployed on Tomcat (not using Spring Boot). It is a legacy app that needs to be upgraded.

The dependencies are:

<dependency>
    <groupId>javax.cache</groupId>
    <artifactId>cache-api</artifactId>
    <version>1.1.1</version>
</dependency>
<dependency>
    <groupId>org.ehcache</groupId>
    <artifactId>ehcache</artifactId>
    <version>3.8.1</version>
</dependency> 

The application context must be updated in this way (note that the Spring p: namespace must be declared in the context file, since the configuration below uses p:cacheManagerUri):

<!-- ***** CACHE CONFIGURATION v.3 ***** -->
<cache:annotation-driven cache-manager="ehCacheManager" />
<bean id="ehCacheManager" class="org.springframework.cache.jcache.JCacheCacheManager">
    <property name="cacheManager">
        <bean class="org.springframework.cache.jcache.JCacheManagerFactoryBean"
              p:cacheManagerUri="classpath:ehcache.xml" />
    </property>
</bean>

The method must be annotated with @Cacheable so that Spring will handle the caching. As a result of this annotation, Spring creates a proxy of the annotated bean to intercept calls to the method and delegate to Ehcache.

This is how to annotate the method (in a service or a DAO implementation), providing the cache alias and the key for the cache:

@Cacheable(value = "retrieveUserIdOfMYGroup", key = "#userId")
public ArrayList<Integer> retrieveUserIdOfMYGroup(int userId) {
    [...]
}

Now, the ehcache.xml config, whose format is completely different from previous versions of Ehcache (this is a simple config):

<?xml version="1.0" encoding="UTF-8"?>
<config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns="http://www.ehcache.org/v3"
    xmlns:jsr107="http://www.ehcache.org/v3/jsr107"
    xsi:schemaLocation="
            http://www.ehcache.org/v3 http://www.ehcache.org/schema/ehcache-core-3.0.xsd
            http://www.ehcache.org/v3/jsr107 http://www.ehcache.org/schema/ehcache-107-ext-3.0.xsd">
	
    <cache-template name="myDefaults">
        <listeners>
            <listener>
                <class>com.afm.web.configuration.CacheLogger</class>
                <event-firing-mode>ASYNCHRONOUS</event-firing-mode>
                <event-ordering-mode>UNORDERED</event-ordering-mode>
                <events-to-fire-on>CREATED</events-to-fire-on>
                <events-to-fire-on>EXPIRED</events-to-fire-on>
                <events-to-fire-on>EVICTED</events-to-fire-on>
            </listener>
        </listeners>
    </cache-template>

    <!-- @Cacheable(value = "retrieveUserIdOfMYGroup", key = "#userId") -->
    <cache alias="retrieveUserIdOfMYGroup" uses-template="myDefaults">
        <heap unit="entries">200</heap>
    </cache>
            
</config>

Cache listeners allow implementers to register callback methods that are executed when a cache event occurs; here the events are simply printed to the log appender. This is how the CacheLogger class is implemented:

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.ehcache.event.CacheEvent;
import org.ehcache.event.CacheEventListener;

public class CacheLogger implements CacheEventListener<Object, Object> {

    protected final Log LOG = LogFactory.getLog(getClass());

    @Override
    public void onEvent(CacheEvent<? extends Object, ? extends Object> cacheEvent) {
        LOG.info("Key: " + cacheEvent.getKey()
                + " | EventType: " + cacheEvent.getType()
                + " | Old value: " + cacheEvent.getOldValue()
                + " | New value: " + cacheEvent.getNewValue());
    }
}
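
With everything wired up, a quick smoke test makes the behaviour visible: the first call executes the method and the listener logs a CREATED event, while the second call with the same key is served from the cache. This is only a sketch; applicationContext.xml and GroupService are illustrative names for the context file and the bean holding the annotated method.

import org.springframework.context.support.ClassPathXmlApplicationContext;

public class CacheSmokeTest {

    public static void main(String[] args) {
        ClassPathXmlApplicationContext ctx =
                new ClassPathXmlApplicationContext("applicationContext.xml");

        GroupService service = ctx.getBean(GroupService.class);
        service.retrieveUserIdOfMYGroup(42); // method runs, listener logs CREATED
        service.retrieveUserIdOfMYGroup(42); // served from the Ehcache heap, no new event

        ctx.close();
    }
}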

React.js: fill options of Autocomplete with API results

The autocomplete is a normal text input enhanced by a panel of suggested options. It helps predict the rest of the word the user is typing, which is useful both for the user and for the overall user experience: it saves time and offers several choices.

In this case, I fill the Autocomplete with the results of the Google Ads locations API.

First, import the Autocomplete component (the package must, of course, be installed):

import Autocomplete from '@material-ui/lab/Autocomplete';
// or
import { Autocomplete } from '@material-ui/lab';

Define the array that will collect the Autocomplete options:

const locationResults = [];

This is how Autocomplete is defined:

let autocompleteBox = <Grid item xs={12} md={12} sm={12} lg={12}>
    <Autocomplete
        id="autocompleteLocations"
        value={this.state.autocompleteValue}
        options={locationResults} 
        getOptionLabel={option => option.locationName}      
        autoHighlight={true}   
        autoSelect={true}                    
        style={{ width: 600, marginTop: 20 }}                        
        clearOnEscape                        
        onChange={(event, autocompleteValue) => this.setState({ autocompleteValue })}
        renderInput={params => <TextField {...params} label="Search location" variant="outlined" onChange={this.handleAutocompleteTextChange.bind(this)} />}
    />

    </Grid>;

Once the user enters some text in the Search location TextField, the following function is called (I expect the user to type at least 3 characters before the API is called):

/**
 * On change text of Autocomplete
 */
handleAutocompleteTextChange = (event) => {

    this.setState({
        query: event.target.value
    }, () => {

        if (this.state.query && this.state.query.length >= 3) {
            this.getLocations(this.state.query);
        }
    })
}

This is how the function getLocations() is implemented:

/*
 * API Function to retrieve locations
 */
getLocations = (locationQuery) => {

    axios(
        {
            method : 'get', url: GET_LOCATIONS, auth: this.state.userInfo.apiAuth, params: {
            user: this.state.userInfo.username,
            customerId: this.state.clientCustomerId,
            query: locationQuery
        }
    }).then(res => {

        if (res.status === 200) {
            this.setState({results: res},()=>{
                this.forceReloadOrganization(res)
            });
        } else {
            this.setState({results: []});
        }

    }).catch(error => {
        this.setState({results: []});
        console.log(JSON.stringify(error));
    });

}

The last thing is to update the options on Autocomplete:

/**
 * Reload the results after API call
 */
forceReloadOrganization = (results) => {

    // Clear previous options so stale entries do not accumulate between queries
    locationResults.length = 0;

    if (results.data && results.data.length > 0) {
        results.data.forEach(item => {
            locationResults.push({
                id: item.location.id,
                type: item.location.displayType,
                locationName: item.canonicalName + ": " + item.location.displayType + " (" + item.location.id + ")"
            });
        });
    }
}

Mongo Replica set with docker-compose

A replica set in MongoDB is a group of mongod processes that maintain the same data set. Replica sets provide redundancy and high availability and are the basis for all production deployments. With multiple copies of the data on different database servers, replication provides a level of fault tolerance against the loss of a single database server. A replica set contains several data-bearing nodes and, optionally, one arbiter node. Of the data-bearing nodes, one and only one member is deemed the primary node, while the other nodes are deemed secondary nodes.

The primary node receives all write operations. A replica set can have only one primary capable of confirming writes with { w: "majority" } write concern. By default, clients read from the primary [1]; however, clients can specify a read preference to send read operations to secondaries.

Doing it with docker-compose is pretty simple. The first step is to create the docker-compose.yml configuration file:

version: "3"
services:
  mongo1:
    hostname: mongo1
    container_name: localmongo1
    image: mongo:latest
    volumes:
      - mongodb1-data:/data/db
      - mongodb1-config:/data/configdb
    expose:
      - 27017
    ports:
      - 27011:27017
    restart: always
    entrypoint: [ "/usr/bin/mongod", "--bind_ip_all", "--replSet", "rs0" ]
  mongo2:
    hostname: mongo2
    container_name: localmongo2
    image: mongo:latest
    volumes:
      - mongodb2-data:/data/db
      - mongodb2-config:/data/configdb
    expose:
      - 27017
    ports:
      - 27012:27017
    restart: always
    entrypoint: [ "/usr/bin/mongod", "--bind_ip_all", "--replSet", "rs0" ]
  mongo3:
    hostname: mongo3
    container_name: localmongo3
    image: mongo:latest
    volumes:
      - mongodb3-data:/data/db
      - mongodb3-config:/data/configdb
    expose:
      - 27017
    ports:
      - 27013:27017
    restart: always
    entrypoint: [ "/usr/bin/mongod", "--bind_ip_all", "--replSet", "rs0" ]

volumes:
  mongodb1-data: {}
  mongodb1-config: {}
  mongodb2-data: {}
  mongodb2-config: {}
  mongodb3-data: {}
  mongodb3-config: {}

At this point, the containers must be started:

$ docker-compose up 
# or
$ docker-compose up -d

Then, open a shell in one of the mongo containers and access the mongo console:

$ docker exec -it localmongo1 /bin/bash
$ mongo

The last step is to run the DB replica set initialization:

rs.initiate(
  {
    _id : 'rs0',
    members: [
      { _id : 0, host : "mongo1:27017" },
      { _id : 1, host : "mongo2:27017" },
      { _id : 2, host : "mongo3:27017" }
    ]
  }
)
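
To verify that the replica set has been initialized correctly, run rs.status() from the same console; within a few seconds one member should report itself as PRIMARY and the other two as SECONDARY:

rs.status()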

Now, mongo is ready to accept connections on port 27011 and, as soon as a DB / collection / document is created or updated, it will be replicated to the secondary servers.

Java: Sort a list of objects according to matching string/pattern

I need to sort a list of objects so that matching items come first and the others follow. For instance: a list of objects whose labels are in alphabetical order, except that all labels starting with "P" are put at the top of the list.

There is no need to create a new Comparator class; an anonymous Comparator is enough, like this:

Collections.sort(result, new Comparator<MyObject>() {
    @Override
    public int compare(final MyObject o1, final MyObject o2) {

        // Special case to put labels starting with "P" at the front of the list
        if (o1.getLabel().startsWith("P")) {
            // If both labels start with "P", fall back to alphabetical order
            return o2.getLabel().startsWith("P") ? o1.getLabel().compareTo(o2.getLabel()) : -1;
        } else {
            return o2.getLabel().startsWith("P") ? 1 : o1.getLabel().compareTo(o2.getLabel());
        }
    }
});

where MyObject is defined in this way:

class MyObject {
    private String label;
    private String value;
    
    // getters and setters
}
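
As a side note, with Java 8 the same ordering can be expressed more compactly (a sketch with the same semantics: false sorts before true, so labels starting with "P" come first; requires java.util.Comparator):

result.sort(Comparator
        .comparing((MyObject o) -> !o.getLabel().startsWith("P"))
        .thenComparing(MyObject::getLabel));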

Global warming and pollution in general

2020.01.10 – CO2 concentration, 285 ppbv in Minerbe (Italy), not great not terrible

Ecology is close to my heart, and the danger we face is real. Raising public awareness of this danger is fundamental, and fighting for a solution is the least we can do. It is only fair, however, to clarify a few things.

It is easy to take to the streets and shout about everything we dislike: no nuclear power, no drilling, no global warming, no fusion experiments, no GMOs, no plants that spoil the landscape, no fine particulate matter, no traffic, no TAV, no motorways, no electric cars because the batteries pollute and the energy to produce them generated pollution (and what do diesel or petrol cars do, exactly?), no photovoltaic panels because in 20 years they will have to be disposed of. Long live nature! Let the powerful solve global warming, for God's sake: we only have one planet and it is no joking matter.

It is easy to make collective appeals; but when it comes to giving something up ourselves, to reasoning as individuals, that is when everything comes to a head.

Our life is made of energy, which is not created by divine grace; I am not talking about spirituality, but about raw balances in joules and the handful (more or less) of physical formulas that hold the universe together.

The food we bring to the table costs resources, and yes, vegan food does too; after all, it does not reach the supermarket on its own. Our travel costs resources (and time! Never forget to count time as a resource). The electronic devices we depend on cost resources: think of how you clog up servers with the memes of a bewildered-looking Messi that you send around, or when you browse influencers' profiles; that is always energy being consumed. Our homes, always warm, or always cool, or always lit, demand resources continuously. This expenditure, this well-being and economic growth, has damaged our planet and keeps damaging it every single second, day after day, month after month, year after year.

Do you hope to satisfy these needs overnight with clean energy alone, quickly enough to avert disaster? It will not happen. It is brutal, but that is how it is. It is not technically possible; it is utopian. Our environmental crisis will be solved only with compromises from everyone; complying with the Paris agreements will not be enough. Have no illusions, there are no magic wands: you can demonstrate as much as you like, but if you truly believe in what you are doing, get ready to change.

Believe me: when we democratically decided to give up nuclear power, none of us would have given up the lights at home. (I consider the choice to give up nuclear power a product of collective madness and of our rulers' inability to take an important decision without looking at mere electoral consensus.)

When you say you want to stop global warming, do you think you can at the same time give up your meat, your exotic fruit, your four cars per family, the heating that is always on, and the rest of your comforts?

Either we produce energy, or we will have no growth and well-being and will go back to the Stone Age. Either we consume resources in proportion to our clean-energy production capacity, or we will pollute. Either we stop polluting (or at least firmly limit the process), or we will compromise the planet.

You decide where to place the compromise in this loop.

(A post rearranged by yours truly from a post found on the Internet whose source I no longer remember, but whom I thank infinitely for the wisdom and simplicity with which these concepts were expressed [if you are reading this, step forward!]. Btw, no, it is not by Greta Thunberg.)

Import SQL file using pgAdmin

I have a docker Postgres image and I want to import the data from another Postgres db. The first thing I did was create a pg_dump on the remote server; then I tried to import it. The problem is that the generated output is a plain SQL file and, if I import this file with pgAdmin, I get an error:

pg_restore: [archiver] input file appears to be a text format dump. Please use psql.

psql is not installed on my Mac because I am running Postgres as a docker image, and the exported file uses COPY to load values instead of INSERT.
The solution I found is to export the db using the --column-inserts flag:

$ pg_dump --column-inserts -U user db_test > db_test.2020-01-02_insert.sql

--column-inserts dumps data as INSERT commands with column names.
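
As an alternative, since Postgres itself runs in docker, the original plain-text dump can also be piped to the psql binary inside the container (the container name my_postgres and the dump file name are illustrative):

$ docker exec -i my_postgres psql -U user db_test < db_test.sql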